Yii2 Swiftmailer 0 Auth exception

If you get the error “Failed to authenticate on SMTP server with username ‘****’ using 0 possible authenticators”, then try removing the username and password from the mailer transport in your configuration file.


This applies when using Swiftmailer, a common PHP mailing library. The example below shows the Yii2 configuration file specifically, but the fix likely applies to other frameworks too.

Here’s an example of the offending config, from config/web.php (or console.php, or a common.php file if you merge the two):

'components' => [
    'mailer' => [
        'class' => 'yii\swiftmailer\Mailer',
        'transport' => [
            'class' => 'Swift_SmtpTransport',
            'plugins' => [
                ['class' => 'Openbuildings\Swiftmailer\CssInlinerPlugin'],
            ],
            'username' => 'smtp-auth-user',
            'password' => '*****',
            'host' => 'exchange.local',
            'port' => 25,
        ],
    ],
],
The exception seen was

Message: Failed to authenticate on SMTP server with username “….” using 0 possible authenticators

This exception caused a major headache.

After investigation it turned out that removing the username and password from the transport caused it to work.

It seems the server we were on was in a corporate environment where SMTP authentication was disabled, but Swiftmailer was still trying to authenticate and failing.
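So the working transport simply omits the credentials. A minimal sketch, using the same host and port placeholders as the config above:

```php
<?php
// Working config sketch: same transport, but with no 'username'/'password'
// keys, so Swiftmailer skips SMTP authentication entirely.
return [
    'components' => [
        'mailer' => [
            'class' => 'yii\swiftmailer\Mailer',
            'transport' => [
                'class' => 'Swift_SmtpTransport',
                'host' => 'exchange.local',
                'port' => 25,
            ],
        ],
    ],
];
```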

Bonus – Enabling SMTP Logging

'components' => [
    'mailer' => [
        'class' => 'yii\swiftmailer\Mailer',
        'enableSwiftMailerLogging' => true,
        'transport' => [
            'class' => 'Swift_SmtpTransport',
            'host' => 'localhost',
            'port' => 25,
        ],
    ],
    'log' => [
        'traceLevel' => YII_DEBUG ? 3 : 0,
        'targets' => [
            [
                'class' => 'yii\log\FileTarget',
                'levels' => ['error', 'warning'],
            ],
            // Log the emails
            [
                'class' => 'yii\log\FileTarget',
                'categories' => ['yii\swiftmailer\Logger::add'],
                'logFile' => '@app/runtime/logs/email.log',
            ],
        ],
    ],
],

With the above config you should now see detailed logs in the runtime/logs/email.log file.

Sabby Love

I’m a bit of a night owl but she takes it to a whole new level, sleeping most of the day.
So when she started staying over we got very little sleep. I was exhausted. But we are both in a good rhythm now.
I love the way we’ll seek each other and curl up next to each other. She’ll fall asleep in my arms. Other times she’ll caress my feet. Sometimes she’ll bite and scratch a bit when she’s feeling that way inclined. It’s not my thing, but to each their own.
I met her some time ago but we met again through the same mutual friend who looks after her Mum and we’ve been together for some months now. She’s black, which is new for me, although she has some white hairs that really stand out and I occasionally pluck out.
I love her.
Her eating habits leave a lot to be desired. Like many Vietnamese, she doesn’t put her litter in the bin but on the floor, so I often have to sweep up afterwards.
We’ve already been through a lot. When she first arrived she was a scaredy cat, especially afraid of the rain and thunder. The roof here does make the rain extra loud 🔊  but now she is fine and can sleep through it.
It’s hard to have any privacy with her around. She’ll often come into the bathroom whilst I’m sitting on the toilet. Yet she keeps away when I shower.

We’ve watched movies together and fought off the flies, moths and bugs that often try to attack at night. She loves it when I sweep up. She didn’t always like me being on the computer too long, but now she enjoys it because she’s learnt how to be with me.

I love watching her play and enjoy life. She’s so cute.
Although she’s also invasive. She’ll check everything and go through my stuff given half a chance and sometimes destroys things. But she’s curious, not malicious.
But. I recently learnt that she’s not a she.
It turns out that Sabby, my cat, is a boy. Both Mrs Loan and myself misinterpreted Sabby’s gender. Apparently that’s pretty easy to do when the cat is young.
Sabby is a mostly black Bombay Cat whom I love dearly.
I was going to post this some time ago but wanted to make a nice collage of images. I ran out of time then, but in 15 minutes’ time Sabby is going away, back to his home.
See, I’ve been living in Vietnam for nearly 7 months now, but it’s time to return to Australia and then, well I’m not sure where I’m going next but I know it’s to be with my girlfriend Jen.

Using jq to update the contents of certain JSON fields

OK, I’ll be brief.


I created a set of API docs with Apiary using Markdown format.

We needed to change over to Postman, so I used Apimatic for the conversion. It was 99% great, except that in the item descriptions it only kept a single line break, not two. As Postman reads the description as Markdown, a single line break doesn’t actually create a new line.


So, I needed to replace the string \n with \n\n but the key is I only needed to do it on the description field.

Oh, and I needed to add an x-api-key header to use the mock server. Even Postman’s own authorisation system didn’t seem to easily support this.

Using the incredibly useful comment by NathanNorman on this GitHub Postman issue I had a glimpse of what I could do.


So, to add the x-api-key into the Postman headers, on my Linux VM I ran the following in the terminal:

jq 'walk(if (type == "object" and has("header")) then .header |= (. + [{"key":"x-api-key", "value":"{{apiKey}}"}] | unique) else . end)' postman_api.json > postman_api_apiHeader.json


I then checked some resources, learnt about the |= update operator and gsub for replacement.
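Here’s a minimal, self-contained demo of walk, |= and gsub on some inline JSON (the field values are just examples; walk requires jq 1.6+):

```shell
# Double up the newlines in every "description" field, however deeply nested.
# printf is used instead of echo so the \n stays a literal escape in the JSON source.
printf '%s' '{"description":"line1\nline2","item":{"description":"a\nb"}}' |
  jq -c 'walk(if (type == "object" and has("description"))
              then .description |= gsub("\\n"; "\n\n") else . end)'
```

This prints {"description":"line1\n\nline2","item":{"description":"a\n\nb"}} — the description fields are updated, and everything else passes through untouched.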

So to replace \n with \n\n in just the description fields I ended up with:

cat postman_api_apiHeader.json | jq 'walk(if (type == "object" and has("description")) then .description |= gsub("\\n"; "\n\n") else . end)' > postman_api_apiHeader_description.json


If you want to see a list of the updated description fields to make sure it worked you can pipe the results to jq again.

cat postman_api_apiHeader.json | jq 'walk(if (type == "object" and has("description")) then .description |= gsub("\\n"; "\n\n") else . end)' | jq '..|.description?'

Hopefully that helps others, or myself in the future.


Note that I had to download the latest version of jq to run this. The Debian distros only ship version 1.5, but I needed v1.6 for the walk function; it’s a pretty easy download.

Some resources:

https://stedolan.github.io/jq/ Official jq site

https://stedolan.github.io/jq/manual/#walk(f) – Official docs description of the Walk function in jq.

https://remysharp.com/drafts/jq-recipes – Some jq recipes

https://github.com/postmanlabs/postman-app-support/issues/4044 – The Github issue that got me down this path

https://www.apimatic.io/transformer – The very powerful API blueprint online conversion system, allowing me to upload a Markdown-style Apiary file and download Postman and Swagger .json files.



Fake Taxis in Vietnam

I’m writing this because I paid 10x the price I should have for a fairly short taxi ride.

This is based on my one bad experience after months of living in Vietnam.

Firstly. If you are going to get in a taxi you should have the Grab app. It’s their equivalent to Uber. Even if you don’t want to, or for some reason can’t order a Grab, look at the suggested price. The taxi prices are usually within 30% of that price, actually they are usually almost identical.
It might be a little different for longer trips, but usually you have to be at a hotel to negotiate prices to the airport that are better than the Grab or normal taxi rates.

So Grab gives you a good reference point price.

The Meter

The second thing to look at is the meter.

The fake Taxi meter looked like this

A fake Taxi Meter

See it’s just got a single display field. The price.

The real Taxi meters look like this

A proper taxi meter.

The top left most number is the price

Another photo of a proper taxi meter (with flash so you can see the front face better)

See the 4 different fields. They don’t just show a price. There’s Fare, Time, Distance and Unit Price. There’s also 5 buttons on it.

For the fake taxi trip I travelled for less than 5 minutes, just over 2km, and paid 420,000 VND (Vietnamese Dong), or about AUD$25. It should have cost about 40k, not 420k.

For reference, on a legit taxi ride the meter showed 121k VND after I had travelled 10km.

The fake taxi itself looked fairly legit from my brief walk up to it: it had been painted green and had a taxi sign on top. The driver was positioned as if he’d just dropped a passenger off.

If you think you are being scammed then get the taxi to take you to your hotel or even just a Circle K / 7 eleven equivalent and say you don’t have the money but you’ll get it. Go and talk to the people at the hotel or store and let them know the price the Taxi is asking. See if they think it’s reasonable. If not, they can usually help out.

If language is an issue, use the Google Translate App. It’ll do conversation mode if it has Internet access and as simcards are cheap and you can buy them from the airport, this shouldn’t be an issue. Otherwise be prepared and have the Vietnamese dictionary downloaded for offline use in the app and a Vietnamese keyboard option on your phone. With TouchPal I just swipe on my space bar to change the language.

My story

I got done by the fake taxi because I’d used up my mobile phone data on the train. I have been living and teaching English in a rural village in Nghi Loc. It’s a 7 hour train ride from the nearby train station in Vinh city to Hanoi. I was going there to meet my girlfriend who was flying in from overseas.

I’ve done the trip to and from Hanoi a couple of times, but that was some months ago and I’d managed to spend very little money in the meantime, my food and accommodation being covered by the BlueSky English Language Center. As such I wasn’t used to the prices.

I walked out of the train station and was asked by someone if I wanted a taxi. I fobbed him off and instead headed to the toilet with my heavy bags.

It was around 8pm by now and after buying some food I realised I couldn’t order a Grab because of the lack of data. I should’ve got a recharge at the Circle K I was just at, but didn’t know that was an option. There’s no free Wi-Fi on the train or at the train station, not that I could find. Instead, carrying heavy camera gear and clothes for filming a wedding, I went looking for a taxi. It was a short trip to the Hanoi Lotus Hostel where I was staying. I figured that the difference between a Taxi and Grab would be barely anything.

I saw someone standing by a legit looking Taxi. The sun had set half an hour ago so it was dark, but it looked like a painted, proper Taxi with the 🚕 augment to the roof.
I got in, showed him the business card of the place I was staying, and he drove me to the intersection about 10m away and pointed in the wrong direction, as if he thought I’d asked to be dropped off at a different location and had stopped between the two. No big deal, I thought.

The meter said 420 and I assumed the decimal point wasn’t displaying, so it meant 42.0, or 42,000 Vietnamese Dong. About the price I’d expected. I gave him 50k VND expecting change and he said no, 420k.

I was tired and confused. When did Hanoi get so expensive, that was more than the 7 hour train ride I’d just done.

But haven’t I paid a lot for a taxi trip before? Yeah. OK. Whatever, I’m so close, I just want to get to the hostel.

I gave him a 500k note. He gave me 20k change. “Whoa,” I said, “where’s the rest?” I wasn’t THAT out of it. He nearly stiffed me on the change as well, for more than the price of a legit trip. In hindsight, his slightly nervous reaction here showed he was probably worried the jig was up. Unfortunately I got out, and it wasn’t until about a minute later that I realised I’d just paid for my most expensive taxi ride in Vietnam, which was also my shortest trip.

So at the critical moment, when going to pay, the System 1, fast-processing part of my brain remembered that I’d paid a lot for a taxi ride before; it remembered that it was some multiple of the train fare.
But what it didn’t remember was the specifics. That taxi ride was from Vinh city to the rural area in Nghi Loc where I teach and live. It takes nearly 30 mins and costs 250k VND. Expensive, yes, but the train ticket still cost about twice that; the taxi was about half the cost of a VIP sleeper trip.
So my brain got the orders of magnitude out of whack and I didn’t think it through properly. I also didn’t expect to get gouged so badly.

Hopefully after reading this you now have a better chance of not being ripped off.

Some other notes

  • It only cost me 25k VND to get from the Old Quarters area to the Hanoi Railway station.
    Also, the railway station isn’t the easiest to find in the Grab search. Try “Ga Ha Noi” as terms like Hanoi Railway station or Hanoi train station don’t really work (not when I tried on the 31st of Oct 2018).

    Example Grab ride to Ga Ha Noi — The Railway station
  • Expect it to cost 250k VND from the Old Quarters area to the Hanoi Airport. It’s a long trip with sections that can go at 80km/hr. Those are faster by car, not bike. Unless traffic is really bad getting out of the rabbit warren that is Hanoi. But trust me, don’t do the whole trip by bike. You won’t enjoy it.

    When searching for the International Terminal at Hanoi, scroll down past the Domestic ones.
  • The Hanoi Airport International terminal is listed below the domestic terminals in Grab. If you find yourself at the Domestic Terminal T1 (A or B) and need to get to T2, it’s about 2km away in the west section of the airport, and not easily shown on Google Maps unless you go into satellite view. The taxis won’t want to take you such a short distance. From the T1 building, which is Domestic Departures, head west to the Domestic Arrivals (E), then go out of the airport and around to the International Terminal. It took me over half an hour to work that out in a frantic hurry.

    Hanoi Airport Layout

Stock Footage Intro

What is it?
Stock Footage is often termed generic or B roll footage. It’s used to fill in shots. Maybe a movie or TV show needs a cinematic fly over of New York city, or a corporate video needs a shot that means “Enthusiasm”. Or just footage of a dump truck, rice field, person at the top of a mountain, you get the idea.
A lot of the Zeitgeist films used stock footage. A lot of the Jay Shetty videos are stock.
Where can you buy/sell it from
There’s a lot of different stock websites.
There’s probably other drone specific websites as well that have sprung up recently.
General Information
Stock footage is almost always 5 seconds to a maximum of 1 minute long, it’s without audio, and it’s best in 4K resolution, although 1080p is also accepted.
There’s some great options, from people, to nature, also things like slow-mo, timelapse, drone footage, 360° and more.
The main issues to consider when filming stock footage are the requirement for no visible branding, shots that have meaning, and the need for model and property release forms.
No Branding
If you film a person up close and their clothing has brand logos on it, you can’t sell that footage commercially. It’s considered distracting, but also there’s trademark and copyright issues. The same is the case for filming a generic shot of a shopping mall, there’s brand logos all over the place.
Even focusing too close on a single car can cause problems and the footage will be rejected.
I think this is generally a lot less of a problem with drone footage.
If you fail this you’ll likely be rejected due to visible trademark.
Model Release Forms versus Editorial Content
When filming, if there’s a person who could identify themselves from seeing the footage then that footage needs a model release form.

If you are filming in certain locations then they’ll need a property release form. I got one filled out for a theme park I filmed at. If you tried to film a cultural heritage site, or somewhere you need to buy a ticket to enter then that’s a great example of where you’ll need a property release form.

I personally use the Easy Release mobile app to help me with getting model and property release forms.
There is, however, a 2nd type of video: editorial content. This isn’t for movies and the like, but for news organisations, and it can also be footage that shows a specific brand, e.g. Coke or Grab or Nike. Those companies, or even their competitors, can buy the clip if they happen to like it.
Setting the submission to editorial content is also often done for things like big groups of people, etc..
Stock footage (and stock photos) is a long game. To make enough money to quit your day job you usually need thousands of clips, of content that people actually want to buy. Expect $5-30 for the sale of a decent video clip, depending on the website it’s sold from. Most clips won’t sell. There are stats about people averaging around 20c to $1/month per clip. Of course the distribution looks more like a power law, I’m guessing: the best 20% of clips are likely worth 80% of your income.
So there’s a stock footage website that you can’t actually buy any footage from. It’s called BlackBox and it’s a syndicated submission service. You upload your footage to BlackBox, fill in the various info, and they’ll submit the content on to the other main stock footage websites.
There’s more to it than that though. Because they are already dealing with the money transfers from various sites, they can also enable things like revenue splitting.
This means that I can assign say 20 or 30% of the revenue for a clip to a friend for helping curate the content. Dealing with the release forms, adding the keywords, title and other information to the videos. This is something that’s fairly easy for them to do. I’ve already uploaded the videos, they just need a laptop and can watch the video and add the info. But I’ve got a backlog of over 100 clips I’ve not published because I haven’t done this, so having someone else do it means actually getting them out there.
You could also do a revenue sharing arrangement with say a model in the video. If you filmed a dancer or a stunt driver then you could give them 50% revenue. They’ll need an account with BlackBox though.
More resources to read
https://www.blackbox.global/faq/ The general Blackbox FAQ’s
https://www.youtube.com/watch?v=sRzHkvAPSOs – A video by Chris Hau which is basically an advert for BlackBox.
https://www.youtube.com/watch?v=ZmQb6bRgkNs – A beginners guide to selling stock. Contains another advert for Blackbox
https://medium.com/@jakubgorajek/how-i-earned-my-first-150k-by-selling-stock-footage-77a4fa0ad1d2 – Info about how one guy managed to make a reasonable amount of money selling stock. Now he runs his own stock website.
https://stockbynumbers.com/2017-stock-footage-sales-report/ – A much more realistic version of stock footage sales. He earned $6,675 in 13 months. I love his aerial shot of the Kayakers going down the rapids. Great drone footage! Netted him about $700 in sales.
The 9 tips are:
1. Keep It Quality
2. Pick a Niche
3. Include Actors
4. Exclude Branding
5. Submit Aggressively
6. Work Out Who Pays the Big Bucks
7. Flaunt Your Work
8. Get Your Analytics On
9. Dream Big — But Not Like, Too Big
https://submit.shutterstock.com/legal – You can download the model release forms from Shutterstock here. You’ll need a model release form for selling commercial video where people are identifiable in it, e.g. where there’s 1 or 2 people walking along and you film them.
I would like to point out that as of the time of writing I’ve only had about 5 videos and about 20 photos actually published to a single stock website and haven’t made any sales in the month they’ve been up there. I have a lot more footage to add, but work full time programming websites, so don’t have the time to deal with keywords and the other stuff, hence why Blackbox is attractive to me right now.

SciFi concepts – nanoBlood, the positioning problem and remote sensing

Note: These are Michael Kubler’s personal notes regarding technology likely to exist in the future. It’s written as a reference guide for The Book of New Eden SciFi novel set in a post-scarcity (Abundance Centered) society. Eli is the main character in the novel.

nanoBlood is considered a form of bodyMod but is important enough to get its own category. Also known as nanobot blood, it is a generic term for a few different things.

All versions come with automated spider, snake and other venom antidotes. nanoBlood also helps power most of the other bodyMods, and a form of nanobots in the blood is how the neuroMod slowly gets assembled in the brain over the months.
v1 : Probes which provide a vast amount of info on your body. E.g. a 2-year warning of a heart attack, instant detection and treatment of cancer, radiation therapy (which is why Eli needs it), etc.
v2 : Red blood replacement which provides far better oxygenation ability and CO2 absorption so you can hold your breath for nearly an hour or can exercise much better. Also has amazing toxin scrubbing and processing.
v3 : The immune system is augmented, with nanobots acting as advanced white cells, T cells, etc. This allows for not just a super immune system, but wirelessly transferred anti-viruses: very quick detection of new infections, diseases and viruses, and the ability to transmit an internal scan to be remotely processed and an anti-virus quickly developed and downloaded. Some neural implants can do basic AV processing, but it takes longer and uses up almost all of their processing power. Note: The nanobots are created in nanobot factories, usually embedded in the bone marrow and a couple of other points around the body; nanites (self-replicating nanobots) are NOT used due to the possibility of them going rogue.
v4 : More than 30% of the blood is usually nanoBlood at this level. It also has DNA scanning facilities which, with the help of a machine that’s a bit like an MRI, allow all the cells in your body to have their DNA read.


Firstly, there’s the nanobot sensors. These augment normal human senses. There’s some which are for touch (pressure), temperature, velocity of travel, through to searching for chemical markers, DNA readers, and ones that search for viruses, bacteria and the like.

It started with people needing replacement blood and plasma due to health reasons and artificial versions being created to fill this need.
But now there’s also nano replacement blood cells. In some cases, like red blood cells, they are capable of orders-of-magnitude increases in oxygen-carrying capacity, although they don’t always have the full range of other functions, such as certain types of waste removal. Handling of things like CO2 and lactic acid is also considerably amped.
There’s the immune system nanoBlood defence, where you can wirelessly download a new biological anti-virus set from the Internet: within minutes of someone on another planet getting a new form of cold virus, you can be actively able to fight it off.

There’s a few systems at play here.

You’ve got the honeypot traps. Cells designed to look enticing to potential bacteria and viruses, but are specially crafted so their exact makeup is known and they are heavily monitored. If their structure is altered then the foreign invader is very easy to identify and then analyse. Often cloud computing is used to convert the analysis into a new anti-viral signature. Some of this analysis includes specialised simulations ensuring that the identifying markers detected, if searched for and attacked, won’t also destroy human tissue.
It’s a major concern that you’ll download an anti-virus update that’s (intentionally or not) malformed and causes the nanobot immune system to start attacking and liquefying your own body. A form of nano-ebola, except not contagious. When a new AV is applied only small amounts are released into a test environment to ensure the anti-virus isn’t dangerous to you.

It should be noted. There’s no use of nanites. Self-replicating nanobots are heavily controlled and very carefully researched, but aren’t allowed for general use due to their potential power of exponential destruction.
Instead there’s nanobot factories, usually installed in your bones which create the nanoblood. You can of course get injected with booster shots as well, as happens to Eli when trying to deal with the radiation exposure.

Other concerns include getting through the blood brain barrier, especially important during the development of a neuroMod.
This can be done through a border-controlled tunnel: specially reinforced nanobot tunnels which have a basic verification check at at least one end, allowing valid nanobots through.
Some nanobots are big and powerful enough to push their way through the barrier as they want, usually with it closing up behind them.

Check out the previous post about neuroMod levels for more info.

One of the main functions of nanoblood is for energy delivery to the neuroMod and various bodyMods. Want a light in your finger? You need to give it power. Want to send wireless communications to a drone or satellite? You need power for that too.
NanoBlood provides the needed energy. Sometimes just by transferring ATP around, but other times using more advanced and more nanobot optimised energy solutions.
Of course, there’s only so much energy the human body can generate and store in reserve. Most people have plenty of reserves of energy for burst tasks. But there’s of course options for consumption or injection of purified energy solutions. Or you can get a witricity (wireless electricity) recharging system bodyMod.

Dealing with toxins and unwanted chemicals. Improved clotting and dramatically improved healing. These are other things you can do with the right nanoBlood setup.

NanoBlood Sensors of interest:
CO2 – A nanoBot sensor network can be more accurate than the existing human body. Letting you see CO2 levels throughout your body.
O2 – Humans are susceptible to oxygen deprivation because we only sense the amount of CO2 in the blood.
CO (carbon monoxide) – Because the main problem is CO binding to the hemoglobin, this isn’t an issue for artificial red blood cells. https://en.wikipedia.org/wiki/Carbon_monoxide_poisoning
Lactic Acid – Created as a byproduct of using up energy, especially during exercise.
ATP – The energy currency of the body.
DNA – Being able to read the DNA of your cells. This is a very tricky endeavour and produces vast amounts of data. A few GB per cell and trillions of cells. This is usually only done occasionally for active geneMod users or in special cases, like radiation poisoning or a geneMod gone wrong.
Pressure – This is often used as a way of helping identify the nearby other nano sensors. Sometimes used to augment the normal sensation of touch, but usually in places like the brain that we don’t normally feel it.
Temperature – This is often used as a way of helping identify the nearby other nano sensors. But of course knowing your actual body’s temperature compared to the perceived temperature can help a lot.
Hormones – Detecting the levels of your various hormones.
NeuroTransmitters – Especially useful in the brain of course. A particularly important issue is knowing when there’s an excess of used up neuroTransmitter chemicals and the brain needs to flush them out, aka, go to sleep. A process that can be vastly sped up.

There’s a whole slew of chemicals to track, both good and bad, plus specialised nanobots, like those searching for cancer cells. Also ones designed specifically for your skin.

Nanoblood sensor network – The positioning issue

So most nanoBlood sensors are very basic. They have the sensor package, be it chemical, pressure, touch or something more specialised, plus a transmitter. The use of millimetre-range radio signals avoids filling up the bloodstream with physical chemical messengers, which is how the body generally works. It also allows much faster signalling.
The sensors have very little power output and might only sample once a second or so, but they only need the signal to travel half a centimetre at most to the nearest relay station.
NanoBot relay stations are distributed around the body. You might have half a million sensors in your index finger and a few thousand relay bots there as well. They are bigger and don’t have sensors, just transmitters and receivers. They can also specifically relay messages to the sensors. The relays usually just forward data to their nearest neighbours until the data gets to a nearby accumulator. The accumulators are approximately the size of a pea and exist in your major bones. These are usually directly wired to each other and up to the main processors. The accumulators might receive a few billion points of data a minute and can store more than 4 hours of it.

The main processors are about the size of your finger and usually installed in your collar bone. The left and right processors work together in a redundant way, allowing for one to fail and the data to still be available. These main processors are where the powerhouse of work happens, from nanoBot signal processing to neuroMod task offloading.
Whilst they have basic 10m range wireless transceivers they are also used a lot in liaising with the shoulder mounted long range Maser system (microwave laser). The directed maser transmissions are how you can communicate with a drone or satellite flying many km away. The drones give off regular pulsed beacons indicating their location and your body fires directed radio waves at the location allowing far lower powered transmissions than would normally be needed over such ranges. Also increases privacy as it’s harder to snoop.

The main issue is that the nanobot sensors are dumb. They don’t know where they are. Their signals are usually very basic: the sensor reading (e.g. current temperature, or a chemical density reading), the sensor type, a unique ID and a counter. The counter automatically increments on each transmission.

Alone, this is hard to use. But the relays can specifically request readings from nearby neighbours and can map which other sensors are nearby, so you know physical proximity to the other sensors. The relays have a timing counter with better than millisecond precision, but might not know the exact date and time, just an incrementing timer. This goes with the data packets. The accumulators do have actual dateTime information. So between all of that, you can know when a reading was taken.

Now you have a massive swath of unique IDs and sensor readings, and you need to create a basic map.

You need to work out where those sensors are and there’s a variety of techniques including:

Ping and traceroute tests. As with Internet servers, by asking the sensors, relays and accumulators to reply as soon as possible you can gauge distance based on how long it took to reply. You can also do traceroutes and work out how far away a sensor is based on the number of hops. Did the signal go through 30 or 5,000 relays, and which ones?

Another method is to make known changes and look for which sensors reflect those changes. Warm up your hand, touch your face, sit on the ground, lay on the bed. Jump up and down. Drink water. All of these will light up different sensors. A big issue is that if you don’t have a high enough level neuroMod the visual and other neocortical information which gives great resolution to many of the senses isn’t available.

A 3rd option is external scanning.

The basic version of this is simply a video camera. This is used instead of your eyes for people who don’t have a high enough level integrated neuroMod.
But there’s actual energising scanners which are often integrated into hospital beds and in MRI looking machines. They work by externally triggering the relays or sensors in specific locations and mapping the responses. If it’s just a basic 2D scanner, like laying on the bed, then both visual information and things like rolling over and normal human motion can help increase the resolution of mapping. Having a scanner on the sides, giving two axis of transmission beams gives even greater fidelity, especially for your insides like sensors in your liver which are harder to know where they are inside of your body.

The encapsulating tube readers can also enable a full-body DNA scan if the right type of nanoBots are available. These are usually energised remotely and need to enter the cells; they can’t live between cells as many other sensors can.

Sometimes you just need a basic arm that goes part way over the body, like an armchair rest. Or simply having a specialised doorframe is enough.

Obviously some nanosensors will be stuck between or inside cells and sit in somewhat static locations, like by a bone, muscle or tendon. Others are in the bloodstream and are moving targets, but the moving ones can be tracked relative to known static ones.
There are other issues, like the sensors breaking down and being replaced regularly by the nanobot factories. At a slow enough rate this isn’t a problem: you have a bunch of sensors in a known area and there are some changes over time, but the positions are generally known. However, the high metabolism mode that Eli enters causes a massive increase in nanobot turnover and he needs regular injections of new nanoBot sets, causing some of the mapping to become inaccurate. Hence he has a basic external scanner built into his bed which works in concert with the video cameras in the room.


Remote Sensing / out of body experiences

Because the nanosensors can work wirelessly outside the body, you could feel information from drops of blood nearby.

But it goes beyond that. The mapping of sensors and sensations can be done for objects out of your body. Say a door.
The door to my room feels the air conditioning on one side and the hot Vietnam heat on the other. It feels the wind, the cat walking past or occasionally trying to scratch it. The hinges know when they are dry and need oiling again. The handle knows when it’s being used and because of the force variations likely by who. The frame and door know when they are closed or open, but also if the house has shifted and the door doesn’t quite close properly anymore. But all this sensory information could be provided to you, once you’ve got a level 7 neuroMod. You could feel the door as if it is an extension of you.
You could then feel the sensations of a tree outside. Here the sensor dust network comes into play: GPS microdots can be used to calibrate the position of nearby smart dust, and video cameras that track occasionally IR-flashing dust sensors can provide high fidelity positioning.
So you could feel the warmth of the sun on the tree. The wind in the leaves. But also the ants crawling on its bark. The moisture in the air and the sap leaking from the wound where a bear swiped it during a fight. The tree obviously won :p haha

You could have sensors inside the tree which tell you about the root system and the soil nutrients it’s taking up. The moisture being raised through the trunk. The CO2 it’s pulling out of the atmosphere and using as a building block for creating more plant matter. How open the stomata are and how well it’s breathing.

That’s a single door or a single tree.
But you could also abstract up and ‘feel’ a whole house or even a whole forest. You would feel different information. The forest would include not just the trees but the deer and birds and ants and bugs and decomposing nature. The ecosystem.
The house would include bedrooms and toilets, electricity, Internet and water, with a whole neighbourhood being equivalent to the forest.

But how large can you go? Can you abstract to a whole country? To all the oceans? To a whole planet? To a collection of planets?

This is different to experiencing the life stream of a friend, be they human or animal. Those are based on the brain’s sensory perceptions and are neuroMod-enabled streams which include conscious processing of the stimuli.

You could also have life streams of AIs, based on their own processing and experiences (conscious or otherwise). But those aren’t the same as remote sensing, which is about creating a new sensory system and a new interpretation system. You’d need algorithms and AI to help with the mapping, the data processing and making sense of the data: turning it into sensations we can relate to, or helping us develop new sensations we could never have imagined.

Mods and Apps – Science Fiction Concepts for SciFi writers

Set 30-50yrs in the future.

Mods and Apps

* bioMods are for specific biological enhancements. These are usually a little bit more advanced than cosmetic surgery in 2017. The standard is the spinalTap mod, an enhanced spine and skeletal upgrade (usually including knees) that dramatically reduces skeletal issues. No more easily popped discs or dislocated knees or shoulders. Basically, humans haven’t finished evolving to deal with walking upright and this helps complete that. It usually has a gMod (genetic engineering) component.

* geneMods are genetic engineering changes, usually an injected retrovirus that rewrites your DNA. Think of it like CRISPR but working on all the cells of your body. Examples include the ability to change your skin pigmentation between normal shades over the course of a few days, e.g. from 0 (albino white) to 5 (very dark). Some changes are easy, like re-enabling a number of regrowth options already in our DNA, so you can cut your arm off and over the course of a few months it’ll grow back. Want your skin pigmented to look like a purple dragon 🐉? That’s beyond normal geneMods. But then, most people would just have an eInk tattoo for that instead of changing the pigmentation of those specific skin cells.

* bodyMods are mainly physical implants of technology. Things like an LED light in your fingertip, or Geiger counters or EM detectors built into your body as new senses. Want to read someone’s DNA by shaking their hand, or remove your stomach and just have print-cartridge-like nutrient containers that you replace? Sure. You can have the standard dental armour upgrade so you only have to brush your teeth once a year, or eInk skin that turns your body into basically a digital screen. You could become a chest breather, replacing your lungs with two holes just above your collarbone: air goes in one hole and out the other, so you no longer have a normal breathing motion. There’s also the usual assortment of faster legs and arms. Maybe your full cybernetic arm could have a nanofactory in it and you could leave a trail of smart dust.

* nanoBlood – Nanobot blood.
All versions come with automated spider, snake and other venom antidotes. Also helps power most of the other bodyMods and a form of nanobots in the blood is how the neuroMod slowly gets assembled in the brain over the months.
v1 : Probes which provide a vast amount of info on your body, e.g. a 2-year warning of a heart attack, instant detection and treatment of cancer, radiation therapy (which is why Eli needs it), etc.
v2 : Red blood cell replacement which provides far better oxygenation and CO2 absorption, so you can hold your breath for nearly an hour and can exercise much harder. Also has amazing toxin scrubbing and processing.
v3 : The immune system is augmented, with nanobots acting as advanced white cells, T cells etc. Allows for not just a super immune system but wirelessly transferred anti-viruses: very quick detection of new infections, diseases and viruses, plus the ability to transmit an internal scan to be remotely processed and an anti-virus quickly developed and downloaded. Some neural implants can do basic AV processing locally, but it takes longer and uses almost all of the processing power. Note: the nanobots are created in nanobot factories, usually embedded in the bone marrow and a couple of other points around the body; nanites (self-replicating nanobots) are NOT used due to the possibility of them going rogue.
v4 : More than 30% of the blood is now nanoblood. It also has DNA scanning facilities which, with the help of a machine a bit like an MRI, allow all the cells in your body to have their DNA read.

* neuroMod – The neural implant. See also the 10 levels of neuroMod integration. Most of the apps are aimed at level 6 or 7 integration.

* bodyApps (as opposed to bodyMods) are those which use the neuroMod to alter how you move, think or interact with implants and body/bioMods. The vast majority need a neuroMod and need special permissions, kinda like when you authorise an app in Google Play, but with a lot more info about what it will actually do and not do. That’s especially true when an app is initiating activities for the first time, like LieToMe changing the way your eyes scan someone’s face, or the complex, stochastic movements of SpeakEasy changing how your facial muscles work to make it harder for people to detect your lies. bodyApps are more about movement and control. The Posture Pedic bodyApp is installed by default (for those with the Trev special set of mods) and goes well with the spinalTap bioMod. It makes you sit up straight, keep your head back (not in the forward head posture) etc., and works to reduce most muscle fatigue and skeletal tension. There are versions for running better, meditation and various martial arts. This is different to the “I know Kung-fu” part of The Matrix: it’s a neuroMod running to alter your motion control. It changes how you’d try to move.

* visionApps – As with other apps, these use the neuroMod. You can get public, private and shared (group) augmented overlays. But advanced vision mods can tap into how your visual processing neurons work. You’ve got to be careful when you start changing how you perceive straight lines or other core things: it’s very easy to go into dangerous trips with reality distortions few drugs can even get close to. These days such changes are automatically detected and reset, similar to going into a ‘preview’ mode of new monitor settings and having it auto-revert, but it is possible to get yourself stuck in a mode where you simply can’t navigate to cancel it, which can cause long term psychosis. Note: you can get eyeMods, which are specific eye replacements. This is what the Death Squad have. Their eyes glow red because they’re creating infrared light to help them see at night.

Some Cool Apps

NeuroTelepathy, usually just known as Telepathy, is the app which lets you talk with other neuroModded humans, but also with AIs and modded animals.

There are various levels of communication: from the equivalent of IRC chat or normal speech, through very carefully controlled, pre-recorded and edited thoughts with concepts, visuals, feelings and the like, to a full stream of consciousness as you feel it or think it, with only basic filtering, e.g. removing most background body sensory info and anything sexual or socially inappropriate.

You can also send concepthesis concepts and more.
The Mind Meld app is the NeuroTelepathy app with a 2 (or sometimes more) way merge and no filters. Obviously named after a Vulcan mind meld.

Penfield – Emotional control
Psychology and personal mental control beyond any advanced meditator. Like the Penfield mood organ in Do Androids Dream of Electric Sheep?
Gives you the ability to control your emotional state and even your thoughts with great precision. If you want.
Most people who follow Dataism as their new religion give this control over to AI algorithms which can optimise their life to be the most rewarding and fulfilling, with lots of time in the state of flow, thus providing great satisfaction beyond just being stuck on the ‘Happiness’ setting.

BabelFish – Universal translator
A core app.
It can work on both audio and visual inputs, acting like a far better version of Google Translate’s camera mode, or the conversation equivalent. Often the main way you know it’s even working is that the audio/video signal has a ‘translated’ tag added, and you can toggle the translation layer to see the underlying signal before translation.

It also lets you speak or write nearly any language, if your vocal cords or hand are capable of the output. Computer-to-computer digital speech and text often isn’t really replicable by humans without speakerbox bodyMods (speakers instead of voice boxes) or text printing capabilities… or usually just a digital display like eInk skin.

Lie To Me – Lie Detector
This works through all the usual input systems, with some degree of accuracy against those not actively blocking it.
It hooks into the brain’s existing System 1 (fast) triggers, but is also able to analyse with a lot more skill and accuracy than most people’s innate lie detection. It usually helps focus the fovea on the person of interest’s likely telltale signs, mostly on their face, from their eye movements to minor muscle twitches. Because it’ll urge the user to look at certain points (they’ll want to do so), it can often be detected by others, so stealthier modes are available, or people use an alternative visual input, like a nearby drone.
It uses the latest in neural net and other AI analysis to instantly make you a better lie detector than any unmodded person on the planet.

SpeakEasy – Lie to people / Pokerface
Usually the SpeakEasy counter app is also installed. It started by detecting the other person’s obvious signs of using the LieToMe mod.
It then tried to intercept your tells and stop those micro-expressions. This dampening of your tells works well against muggles (unmodded people), but the weird periods of dead movement then became a tell themselves.

Because such dampening is detectable, the normal mode when talking to someone else who is also modded / has a Wizard Hat does the equivalent of creating white noise, but with your face muscles, eyes and nearly everything else. You’ll have a sea of random micro-twitches and an erratic heart rate, and seem jittery to the system in a way that masks your emotions and signals: you appear to be changing emotions and sending lie and truth signals in such rapid succession that it becomes meaningless.
Think of the big face of the Matrix core that Neo talks to at the end of the third Matrix film, or other such particle, water or electricity based characters: there’s a general form there, but it’s annoying to view for a long time. Negotiators refuse to talk to people with this mode on, but those doing the last remaining bits of capitalist politics (in the non-RBE cities that aren’t New Eden) have it on by default.

People using the noise version often refer to it as PolkaFace, a play on poker face.

Look to the Stars – Where Am I ( Night Sky )
Look into the night sky and, based on the stars, know your position in the solar system (not just lat/long on Earth or Mars, but anywhere out to the Oort cloud and within a ±300yr time range). The full version attempts to work out your position in the Galaxy over a ±5,000yr range, although it needs a larger download and processes much faster if you have a smart watch or even a space ship to help.

See that Key
Look at a key or any object and once you’ve seen it from enough sides you can have a 3D model generated and be able to 3D print it.
Works great when you’ve got a large drone nearby that can 3D print it for you, or are simply near a city which has fast 3D printers.

Morse code reader
Once installed this runs in the background looking for Morse code signals, especially audio beeps and light flashes, and will detect and convert them. This allows Q codes and other Morse shorthand to be converted into their general meaning.
It will also detect binary, and other basic protocols by default.
This is often hooked up to people’s flashlight fingers, and it can be fun to see two kids running around talking via their flashing fingers. There are more advanced transmission protocols for tapping and vibrations too, so someone just tapping on a table, or on their friend’s hand whilst holding hands, or on their leg when cuddling, can be talking behind other people’s backs.
As with many of these things, there’s an ever evolving war of encryption and decryption, with kids using a greater variety of ciphers, although the generic decryption tools and AI decryption make basic changes to the ciphers easier and easier to break. That said, the normal child/teacher dynamic is now a lot more like that of a respected mentor, so it’s rather different to normal school setups.

The voiceBox bioMod, allowing for a built-in speaker, isn’t nearly as popular as light fingers. It’s often used by cohesive modded groups to transmit on high frequencies most humans can’t speak at, and usually can’t hear without hearing bioMods. But nearly all people with such bioMods also have the neuroMod implant, so they just communicate via standard encrypted wireless telepathy.

The development of the Morse code app is often used as a standard example of app development of its kind. The first developer thought it would be cool to read Morse code like they do in all the movies, without actually having to learn Morse code, so they worked out how to hook into the various pattern recognition systems of the brain and neuroMod. It’s usually a combination of system 0 (basic visual neural detection of lines, shapes and time-repeating patterns) plus the first order, System 1 (fast) processes which usually detect the flashes. By buffering a few seconds of input on a background thread that analyses the signals, it can try to find anything that looks like Morse code. Once detected, it can analyse the input with greater focus (be it visual, audio, kinesthetic or even smell), do more specific optimisations around the particular signal’s ‘hand’ (timing patterns, be they milliseconds or minutes in duration), and do more noise reduction, context analysis (does QSR mean the general Morse code shorthand, or just the three letters?) etc…
Special patterns, like SOS are extra highlighted.
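The conversion step at the heart of all that can be sketched in a few lines. This is purely illustrative: in-story the app hooks into neural pattern recognition, whereas here timed on/off pulses stand in for the buffered signal; the timing thresholds (1/3/7 units) and the letter table are standard Morse, everything else is invented:

```python
# Illustrative sketch: decode timed on/off pulses as Morse code,
# inferring the unit length (the sender's "hand") from the data.

MORSE = {".-": "A", "-...": "B", "-.-.": "C", "-..": "D", ".": "E",
         "..-.": "F", "--.": "G", "....": "H", "..": "I", ".---": "J",
         "-.-": "K", ".-..": "L", "--": "M", "-.": "N", "---": "O",
         ".--.": "P", "--.-": "Q", ".-.": "R", "...": "S", "-": "T",
         "..-": "U", "...-": "V", ".--": "W", "-..-": "X", "-.--": "Y",
         "--..": "Z"}

def decode(pulses):
    """pulses: list of (on: bool, duration) tuples, any time unit."""
    unit = min(d for on, d in pulses if on)  # shortest pulse = a dot
    text, letter = [], ""
    for on, d in pulses:
        units = round(d / unit)
        if on:
            letter += "." if units <= 1 else "-"
        elif units >= 3:                     # a 3+ unit gap ends the letter
            if letter:
                text.append(MORSE.get(letter, "?"))
                letter = ""
            if units >= 7:                   # a 7+ unit gap ends the word
                text.append(" ")
    if letter:
        text.append(MORSE.get(letter, "?"))
    return "".join(text)

# "SOS": three dots, gap, three dashes, gap, three dots (unit = 1)
sos = [(True, 1), (False, 1), (True, 1), (False, 1), (True, 1), (False, 3),
       (True, 3), (False, 1), (True, 3), (False, 1), (True, 3), (False, 3),
       (True, 1), (False, 1), (True, 1), (False, 1), (True, 1)]
print(decode(sos))  # SOS
```

The real work described above (noise reduction, context analysis, adapting to a sender’s timing quirks) would wrap around a core loop like this one.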

It started as more of a gimmick than anything, but others took the code and made it easily extensible, so new filters and new detection algorithms could be added. More hooks into different areas too, like concepthesis messaging (concepts and learnings) made available as sound or touch for those who don’t have normal wireless Internet to the brain enabled, usually at museums or art galleries as a form of accessibility / fallback support.
It can even detect messaging shown via augmented reality, or as simulated external sensory input (sound, touch, vibration, smell, taste) during immersive VR sessions, etc.

Then there’s a whole host of different detection algorithms and hooks into apps like the more powerful generic decryption systems. These can offload processing across a neuroMesh (a group of other people with neuroMods), allow for joint detection of inputs (e.g. thousands of people worldwide getting small pieces of the puzzle) and of course scanning over very long time periods, years rather than seconds. Most of those are rarely used addons, mostly done for fun, although some cool detection of earthquakes by people who were meditating on a neuroMesh has been possible. On the direct vibration sense, some people will augment their heart beats to act as Morse code, detectable by their partner just by holding hands when fairly quiet. Heart rate detection of others is fairly easy at a greater distance, though, with some of the infrared eye bioMods or tuned electrical EM field detectors, be they internal to the person (thus requiring a lot of work to cancel out the detection of their own nervous system’s EM activity) or external, like built into walls and ceilings, usually trying to focus on bioelectric signals rather than normal wireless transmissions.

4 Versions of the Olympics
#1 – The existing normal Olympics. Healthy and unaltered; no drugs, nothing. #1D is the disabled Olympics, although that’s a LOT rarer given most people opt for replacement grown limbs, or even end up in the #3 tech enhanced version with bionic limbs better than what they used to have. Actually, it was because of dealing with disabilities that we developed such good bionics.
#2 – Drug enhanced – Humans using drugs and other general enhancements, but nothing we’d consider active or passive technology.
#3 – Tech enhanced / Cybernetic – People with nanoblood, implants, bionic limbs, and many of those beyond the Kubler cascade.
#4 – An android-only version, mostly for robots. Although there are occasionally matches between humans and androids, e.g. RoboCup Challenge style soccer of humans against robots, usually only fully cybernetically enhanced people have any hope against even reasonably well optimised robots. Often the androids will zoom around on their equivalent of roller skates instead of pumping their legs, or will have very different forms of locomotion, and there are some surprising ways the genetic algorithms for, say, long distance javelin or high jump can create amazing robots. Few people even consider it the same sport, but then, the androids never really saw much in sport. The more interesting robots are the ones that attempt to beat the best enhanced human in each area without replacing parts: being better than the best human at not just throwing and jumping but shooting and running and playing sports and swimming. To be good at all of them means some very interesting trade-offs and engineering feats.

Kubler Cascade

The Kubler Cascade is when there is some sort of driving force in a person’s life which makes them want to be more augmented, e.g. wanting to be the best athlete or fastest tech head they can be, and in doing so they quickly jump from the generally acceptable 40% range to the 80-90% level of augmentation. It’s harder (I’m assuming) to go beyond that level due to technological difficulties, and it’s usually easier to jump from there to being digitised and having your consciousness run completely inside a computer. However, it usually takes a different force or pressure to make someone want to be digitised.

The other point to highlight is that most people are fine with up to 40% augmentation. This usually involves neural implants, basic genetic corrections, and enough nanobots to ensure they are healthy and have an abundance of sensors to know if there’s anything wrong with them.
But apply a competitive force and you start replacing your legs with fully technical ones which allow you to run over 100km/h. To do that for anything more than a quick burst you need to feed them enough energy, so you need to up your nanoblood levels, replace your heart, become a chest breather and replace your stomach.

Obviously, augmented people in the tech enhanced version of the Olympics are the most susceptible to this.

Often the Kubler cascade is also defined as the point before which you are still considered a normal Homo sapiens, but after which you are classified as something else. Some people use the popularised term homo-deus, or God-like human. Others use De novo sapiens, or just novo-sapiens, meaning ‘new humans’. Or the more pretentious French version, Nouveau-Sapiens.

10 Levels of neuroMod Integration
There are actually 7 main levels of neuroMod integration, with levels 8 and up considered advanced.
Once you go past around level 8.4 of integration there are issues with being able to reverse the process and remove the neuroMod: too much of your brain has been replaced. Hence most people are between level 7 and level 8.2 integration.

Level 0 – Nothing. You are an unmodded. A muggle.
Level 1 – The seed has been injected and there’s a nanofactory running but it hasn’t connected with anything.
Symptoms: Some light tingling at the site of the injection and maybe a tiny bit of discomfort where the neuroMod factory implant has an injection site poking through the blood/brain barrier.

Level 2 – The neuroMod is just beginning to integrate with some nearby neurons. It’s ensuring compatibility, ensuring the body doesn’t reject it.
Symptoms: During this time there might be small, barely noticeable glitches. Things like unexpected memories. But it can sometimes also trigger an out of body experience.

Level 3 – The neuroMod now has communication with the Internet via the wireless chips in your collarbone. It’s also creating main pathways to the important parts of your brain (the motor cortex, amygdala and frontal cortex, plus along the path between your optic nerves and visual cortex) and it is generating more of the main highway infrastructure. The main tree branches.
Symptoms: The motor cortex and sensory cortex integrate separately, so you could have issues where the sensation is there but you can’t move, or vice versa. This causes a weird lack of proprioception, the feedback loop of going to move something and then feeling the texture, weight and other senses that let you know about that object.
The weirdest is when it mutes the channel carrying the reliability / probability of sensory input: everything feels uncertain. You don’t know the certainty of what you are perceiving, and it can get very weird.

Level 4 – Initial visual integration. You’ve got visual overlays as the neuroMod is now intercepting the optic nerve.
Symptoms: At first you get vision that seems empty (no signal) and the brain fills it with imagined creatures or shapes (like the visual aura I get before a migraine, or Oliver Sacks’ talk about Charles Bonnet syndrome https://www.ted.com/talks/oliver_sacks_what_hallucination_reveals_about_our_minds ), or distorted and weird. By the end, though, you have Augmented Reality and basic close-your-eyes Virtual Reality. You interact with the overlay using basic eye tracking gestures (seeing as that’s controlled by the lower level lizard brain, not your motor cortex… which is cool).
Also, the midbrain controls your eye movements. What happens when the neuroMod forces your eyes to view something? I’m guessing there has to be a thought-like request or recommendation before it takes over an eye movement, otherwise it feels weird, but it’d be simple and subtle. It could be a weird sensation to make use of, especially during AR gaming.

Level 5 – Full Motor cortex, and emotional integration. It can now change the way you move and how you feel.
Initial sensory based Life Stream recording happens here, as all the main sensory input is being recorded and the global workspace (conscious thought) has been decoded to a sufficient extent.
Symptoms: Sometimes you’ll have weird twitches as it triggers some muscle groups and sudden emotional outbursts or numbness. You’ll also likely get some brief out of body experiences.

Level 6 – Thought manipulation. No longer just reading your thoughts, able to change and manipulate them. Able to start mapping your memories.
Symptoms: <Insert>

Level 7 – FULL Integration. You can have Full Virtual Reality (FVR), Matrix style. An immersion so strong that without certain restraints and your memories you might not know you are in a simulation.
By now you can think about 3x faster than normal and sleep only 2.5hrs a night. You’re a wizard, Harry!
Symptoms: <Insert>

Advanced neuroMod Levels
Level 8 – Technomancy. This is for the speed freaks and involves using the neuroMod along with a whole bunch of enhancements to increase the speed of your thought by another 5x, so 15x faster than normal (23ms instead of 350ms response times). Certain very special capabilities, like bullet time, run as ASIC-like dedicated hardware: to get something decent out of 350ms of bullet flight time you’d want 5ms time slices, giving 70 frames of action and reaction, so it’d feel like about 3s.
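Those figures check out as plain arithmetic. A quick sketch (the ~43ms of subjective time per frame is my own assumption, added to reconcile the “about 3s” figure; the other numbers are from the text):

```python
# Quick arithmetic check of the level-8 figures above.

baseline_ms = 350                    # normal human response time (given)
speedup = 3 * 5                      # level 7 gives 3x, level 8 another 5x
response_ms = baseline_ms / speedup  # ~23.3ms, the "23ms" in the text

flight_ms = 350                      # bullet flight time (given)
slice_ms = 5                         # bullet-time slice (given)
frames = flight_ms // slice_ms       # 70 frames of action and reaction

# Assumption: each captured frame replayed at a film-like ~43ms of
# subjective time makes 70 frames feel like roughly 3 seconds.
subjective_s = frames * 0.043
print(response_ms, frames, subjective_s)
```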
This is the level that the technotopians are required to be at, but also the negotiators as they have the Bullet Time mod.
Symptoms: You’re able to process things so fast that conversations with people of a lower speed level aren’t possible in realtime thought; you have to buffer the input and output. You also find that Internet latencies become noticeable, so huddling near data centres, or physically being near the groups of people you are telepathically talking with, becomes important. Also, the amount you sleep is drastically reduced, to about 30mins a day, thanks to advanced neurochemical cleansing.

Level 9 – ? Are you human, cyborg or what?

Level 10 – You’ve no longer got a biological brain; it’s completely replaced. This hasn’t been achieved… yet. But people are working on levels 9 and 10.

Level 10+ would be a fully digitised consciousness.

Initially compiled by Michael Kubler on the 19th of October 2018 from a variety of his ideas for the Book of New Eden novel. Inspired to post this by Geoff Kwitko’s live stream talking about his interest in SciFi.

Spiral Dynamics integral – An Introduction

Spiral Dynamics is, in a way, a system of psychometric profiling. Think of Myers-Briggs or the OCEAN model mixed with Maslow’s hierarchy of needs, but a whole order of magnitude more than that.

The core idea of the model is that people focus on certain aspects of life at different stages. At the higher stages they are usually in a higher order level with more complexity.

Whilst Spiral Dynamics is the work of Clare W Graves, Integral Theory is the work of Ken Wilber and SDi or Spiral Dynamics Integral is the merging of the two concepts.


  1. Survival (Beige) – Where all the attention is focused on survival.
  2. Tribal (Purple) – All about social relationships and maintenance of customs.
  3. Egocentric (Red) – Explore personal identity and challenge tribal authority figures or belief systems.
  4. Authoritarian (Blue) – In order to understand purpose of life and have security need to obey higher authority and rules.
  5. Enterprising (Orange) – Awaken independence and achieve results while challenging authority and test possibilities.
  6. Humanistic (Green) – Seek love and peace within through sharing and becoming useful in the community.
  7. Integrative (Yellow) – Live free and explore what is life about while understanding that chaos and change are natural.
  8. Neo-tribal (Turquoise) – Experience the wholeness of existence by becoming one with all things, and restore natural harmony and balance.


Spiral Dynamics outline

Integral Theory is based on the 4 quadrants of I, We, It and Its.
A great visual explanation of the quadrants:

Video: Integral Theory in 5 minutes


Podcast: The Graves Model (Spiral Dynamics)

This is a really great podcast. Listen to it with a piece of paper and a pen so you can write out the levels yourself and better understand the stages.



Some Resources

https://www.toolshero.com/change-management/spiral-dynamics/ – A great introduction article.

https://www.slideshare.net/wrightleadership/culture-scan-an-introduction-to-spiral-dynamics – A presentation slide set.

https://www.bewellbuzz.com/wellness-buzz/spiral-dynamics-evolution-human-consciousness/ – This comes across as a bit more hippy than the actual theory is, but it might resonate with some people, especially those on the Green level.

https://www.reddit.com/r/Makingsense/comments/6temd1/spiral_dynamics_crucial_knowledge_about_the/ – A great Reddit post about it all.

http://www.spiraldynamics.net/ – Some official looking website. Most official looking sites want to charge you a whole lot of money so you can attend a course to learn more.

http://www.clarewgraves.com/theory_content/conceptions/intro.html – The official Clare Graves website and some content.

https://www.valuematch.net/en/spiral-dynamics-assessments/value-assessment – An example of how Spiral Dynamics can be used as an assessment tool for understanding people’s levels and behaviour.

https://medium.com/@sterlingcooley/coral-meme-spiral-dynamics-level-9-3db412799198 – An introduction to Level 9, the Coral world view. It’s still very new and almost no one is at that level, so it’s a lot of speculation.


19th Jan – Looking back at the last week

Wow, what a crazy week.


Last weekend I photographed some matches of the Volleyball SA Beach Volleyball Open, which was part of the Commonwealth Games selection process.


Whilst at Glenelg I also swam out past the jetty, although parallel to it and far enough away that I wasn’t going to get a fishing hook to the face. The water 💧 was warm but the waves were fairly strong.
Also got to have a good dinner with Jason who’d been the official photographer for the event and had borrowed a bunch of my camera gear which was why I came down to visit the first time. I returned because it’s fun and I’m not likely to get much time to visit the Adelaide beaches in a long long while.
I was late on Sunday because I was trying to get into the Selfkey ICO, but they’d already sold out.
Ohh and that Sunday evening I checked out the TDU – Tour Down Under event in the city with my older Bro and his family.


Monday after work I visited a potluck house party and we watched Hunt for the Wilderpeople. Damn I love that. I now have to go watch all of Taika Waititi’s work.

Of course, chatting with awesome people until past midnight then a nearly 45min ride home and another hour to cool down. The 2nd night of not enough sleep before work.


Tuesday arvo I went to Salsa dancing classes at Latino Grooves. It was the first class of the year and only the beginners class. I did some classes last year but jumped into the 3rd of the season, which was intense. So this was good, much much easier for me. Hint to guys: There’s a LOT of girls doing dance classes and by redoing them at a slightly lower level you’ll be impressively awesome to them, haha 😂


Heading home from Salsa I pre-ordered a Subway sub using the OTR app, giving me a chance to test the changes I deployed hadn’t broken anything obvious. Plus I was hungry and 😴
Thing is, the police had cordoned off the main road where that OTR is, from the Maid and Magpie up to the Avenues Shopping Centre, but they let me through from the other side.
Got home, slept for an hour, got up in time to research the next ICO, AidCoin and had my transaction done within 20s of it going live. Yay. I’m getting better at this and now properly grok the Ethereum Gas prices.


I’ve been listening to the audiobook of Tim Ferriss’s The 4-Hour Workweek and it’s very appropriate given I’m planning to spend a month in Germany and 6+ months in Vietnam.
One suggestion I want to try is an information detox. I’ve logged my phone out of Facebook and have been better at adding articles to pocket to theoretically read later. Although that lasted about 6hrs…


Wednesday has been dealing with broken stuff. A website down, backup server borked, stuff broken in testing at work. But crypto price dropped a lot. Time to buy!! If only Bpay didn’t take so long.


Ohh and I announced to the two remaining activism groups I’m a part of that I’m leaving. Need to hand over the responsibility for Zeitgeist Movement South Australia and also the CLEANSA website.


Thursday was wedding day for Kate and Jordan. A nice hot one in the middle of the day, but under an amazing tree, so that was good. Caught up with family I haven’t seen in a very long time and others I haven’t seen before.
Someone ended up with their keys at home, long story, but I drove them and some family to go pick them up.
The reception at Waterfall Gully was nice. The 42.5°C day wasn’t. Even opening the car door nearly burnt my hand and the aircon really struggled. The TV screen in the car complained it was too hot and shut off (I don’t use it anyway, but it was interesting to see).
Home for just long enough to change into something cooler and watch a little Marvel’s Agents of S.H.I.E.L.D before heading off to the Norwood Parade. It was for the usual Thursday at Boun Giorno’s, except today was some annual food and wine festival. Helped keep it interesting.


Thursday night after dinner I managed to catch up on the rest of Agents of S.H.I.E.L.D and get a good early night’s sleep; I think I was dead to the world by 12:30am.


Friday, another hot day.
Hot enough that the aircon at work really struggled and the Fire Alarm kept going off despite there not being any smoke.
But some Cards Against Humanity kept things fun.


Then it was Friday night. Writing night in the city!
Whilst riding in I happened to briefly catch up with Pas, an awesome friend. Then some pre-emptive shopping for things I’ll need for my trip overseas, like antihistamines. Had a shoulder, neck and back massage, then a burger at Lord of the Fries. I was ready to write. Except it was too hot to write outside and, damn it, I found out Movenpick is still closed!
They used to be my jam. I’d write in the mall until everything closed past 9pm then head to Movenpick until Midnight. Thankfully I found somewhere close, air conditioned and seems pretty carefree about how long I stay. Australia’s Pizza house. Just next to Zamberro’s. Seats about 40, but rarely had more than 4 people, with one being an attractive waitress as an added bonus.


But alas, the week is over and I’m still massively behind in all my plans even if I have been enriched in experience.


Tomorrow I’ve got to clean up my room. Ohh. Yay… But it’s for a good cause. I’m showing a friend how to use a light tent and maybe some of the basics of Crypto, but really I want to talk to her about the big stuff. Comparing The Zeitgeist Movement with Deep Green Resistance and ways we can make a meaningful difference in society.
Sunday is a CORENA committee strategy meeting. Ideally I’d have some draft of the video I should have been putting together, but there’s no chance of that.


Maybe next week will be calmer. The menu calls for meditation, regular sleep and a touch of Salsa dancing. Having Friday off for Australia Day will be good. I might get a chance to actually book my plane flight and work out visas and medical insurance.


I also have to work out what to do for exercise. I can’t risk doing a long run with my Achilles Tendon as injured as it is. Breaking it before going overseas is not advisable. Maybe I’ll have to be a Middle Aged Man Not-In Lycra. I’m not buying some Lycra for bike riding, but a trip to the end of Linear park would be good when it cools down.


Well, that’s my stream of consciousness of my week done. Yet another snapshot of my life for the archives.
Michael Kubler
19th Jan 2018.

2017 – The Year that was

So it’s the last weekend of the year and time for a more in depth than usual review.


I do a weekly review, usually just spending a couple of minutes looking at the list below and thinking about each item a bit.
Recently however I have realised I need to start writing my thoughts down and rating these so I can compare over time.


Review :
  1. Finances – Sent my invoices? Paid my bills? Next bills coming up? Financial snapshot? Assets and Liabilities listed? Working on creating a passive / portfolio based income stream?
  2. Health and Fitness
  3. Family and Friends
  4. Production vs Production Capacity – Balanced?
  5. Household tasks and general organisation
  6. Recent Short Term Wins
  7. Long Term Vision
  8. Important but not urgent tasks
  9. Learning and Education (what to direct my learning towards next)
  10. Teaching, Training, Writing and sharing my knowledge.
  11. Willpower challenges and strength.
  12. Food/Nutrition.
First, before I do this, I want to take a general overview of 2017. To put it succinctly, this was a year of recharge and rejuvenation. I was focused on myself more than the community and I’m in a better spot now than this time last year.


Also my outlook for 2018 looks to be interesting.
I’m going to be handing in my resignation effective the 1st of April. I figure 3 months is plenty of time. My current plan is to spend about 3 weeks in Germany; the initial aim is to be there for the 2018 Zday conference, helping organise and film it and catching up with the amazing activists and people. I’m expecting that to take about 1.5 weeks. Depending on people’s availability I’ll go to Florence, Vienna and Freiburg to visit friends in Europe, then spend a week, or however long I have left, in Berlin trying to soak up some of the startup culture they have there.
Then for the big bit. I’ll be heading to Vietnam (or maybe Thailand) and spending 6+ months writing my sci-fi novel, currently titled The Story of New Eden.


The reason for spending time overseas is twofold. I’ll be able to focus better. If I spend time in Adelaide I’ll be drawn into everything here and take on too many distractions. Secondly with the same amount of money I’d likely only get a couple of months in Australia.
If anyone knows of some good places in Vietnam, Thailand or similar that are cheap to live in, have some Internet access, have enough English speaking people and don’t close early (I’m a night owl, I’ll happily stay up till 3:30am and sleep till 12 noon), then please let me know! I realise that cheap living conflicts somewhat with the Internet access and not closing early, as rural places are usually cheaper to live in but a city will provide the other parts.


OK, so my review:


Finances [ 0.5 -> 8 ] ( BIG change for the better! )
The financial snapshot I did on the 20th of Nov 2016 shows me in $10k credit card debt, having maxed out two cards. I had about $3.6k in operating money to cover expenses. I even owed my then-girlfriend’s Mum, as we needed a loan from her earlier on to cover rent. Things were hard.
I’d rate this a 0.5 out of 10. I’d hit rock bottom, and although I’ve done so a couple of times before, this was hard to get out of. Trying to do a startup whilst covering both my expenses and my girlfriend’s, and living in a more expensive place, burns through money really fast. That said, it was a manageable amount of debt and something I’ve since bounced back from. With a full time job doing web development at The Distillery and some investments reaching maturity I have now paid off all my credit card debt, closed one of the cards (the remaining one is for emergency use only) and have nearly as much in savings as I had in debt. I also have some more investments I could draw upon if needed. Not too bad! From financial struggle to financial thriving. Although I expect the trip overseas will likely cost me about $10-20k given the duration and the fact I still have to pay for web servers, storage and other monthly costs on top of plane flights and, of course, general living expenses.


Still, I’m a long way off of my goal. My aim is to help create an NL/RBE style (Natural Law/ Resource Based Economy) sustainable town. Well it’ll start with a small village first. The thing is, I expect it’ll take about $30 Million in initial setup funds and I’d like to contribute up to 49% of that (don’t want to be in a majority shareholder like situation).


Health and Fitness [ 9 -> 4 ] ( Yeah, kinda injured )
This time last year I was a lot fitter. I had done the 12km City to Bay in under 55mins. However during my trip to New Zealand this year I injured my Achilles tendon and also dislocated my knee. I thus spent much of the year not exercising and went from under 5mins/km to over 6.5mins a km. I’m desperately trying to get some of my fitness back, but whilst doing an 18km run doesn’t seem that hard, it’s really bad on my ankle and I just can’t run like I used to. Instead I’ve been doing 8km runs and only last night managed to do it averaging 5mins 25s a km. Finally an OK speed, but now I have to work on distance again.
I’m only running once a week, mainly due to full time+ work and all my other activism and other tasks making it hard for me to get home and do a smaller run in the middle of the week. But also because I don’t want to damage myself further. Hence going from a 9 down to a 4. The one upside to not running as much is that I was doing a lot more stomach exercises, but whilst I’ve still got some flab around the stomach that doesn’t really show as well as I’d like. Thankfully diet changes seem to be helping with that.


Family and Friends [ 6 -> 8 ]
I admit it, I’m not that close with my family. Not compared to some I know. I can go for most of a year without talking to my brothers and sister and many months without talking to my Mum, but I’ve been a bit better this year.
My sis is in an interesting spot, having injured her leg whilst also pregnant, so I’ve been spending a little more time catching up.
Also, I’ve been living at my Dad’s. So instead of only seeing him once a week or two at the usual Boungies dinner it’s now daily.
I of course moved in with Dad at the start of the year when I moved out of the rental property I’d been in with my now Ex. I was going for a 6 week trip to Brisbane and New Zealand and basically just had the room full of gear that I couldn’t easily fit into my storage unit. But I came back from the trip and organised things and have been living here since. I was going to move out a couple of months ago, but realised that I still need to keep saving for my trip, that the place I was going to move into didn’t have fixed Internet access and that this is a great time to be spending with my Dad whilst I still can.


Production vs Production Capacity [ 5 -> 7 ]
Last year things were hard. I was emotionally and financially drained and at times depressed. I was meant to be working on my own startup but instead spent a lot of the time addicted to computer games. Over a 2 year period I played over 1,500 hours of Ark Survival Evolved. That’s a LOT of gaming. It was a coping mechanism, but not a very good one.
This year, I’ve been through a number of really intense periods, followed by the daily grind which is still fairly draining. But I’ve been a lot better at detecting when I’m not in a good state and recuperating as needed.  As I once read, you can endure world class stress if you have world class recovery.
So when work felt hard and depressing, I spent a couple of weeks watching TV shows like Supergirl. Instead of feeling really bad about not getting done the things I felt I should have been doing I focused on the positive feelings and tried more gratitude meditations, simply smiling a lot more and taking solace in the fact that things are getting better. Afterwards I was feeling back to my usual productive self.
Despite my injuries I’ve still been exercising where and how I can, including more bike riding, or abs and arm exercises if my legs weren’t up for it. Between that, meditation, occasional TV shows and the odd computer game (barely any gaming compared to the last few years), I’ve been keeping myself in balance.


But I’m also a lot more productive this year than last. Full time work instead of 3-4 days a week. I’ve only managed a couple of weekends working on my own programming projects (learning ReactJS, GraphQL, etc..), but I’ve got a habit now of spending Friday evenings after work writing my novel, or at least going into the city and writing. Sometimes I’m really quite productive, sometimes I get distracted by friends and acquaintances, and other times I actively seek out chance encounters with friends. It’s the start of a good habit, but I’ve realised I need to really buckle down and spend a big chunk of time, without any major distractions, trying to grok all the snippets of ideas and parts of chapters I’ve written over the last 4+ years and assemble them into a cohesive story. I suspect the hardest bit will be throwing away all the bits of story I want, but can’t fit in.
Household tasks and general organisation [ 7 -> 7 ]


The nature of this has changed from me organising a whole house and even having to move out, basically on my own, to really only looking after a room and trying to clean up the kitchen after using it. I used to do a lot more batch cooking and cleaning. I’d do larger cleans once a week or so, but here in a more shared environment it’s expected you clean up after yourself sometimes before you’ve finished eating (depending on when others want to use the kitchen).

My room is a mess, because it’s also a working space. Right now I’ve got years worth of paper documents to scan and organise. I’ve now got a nice fast multi-page feed scanner and once digitised I hope they’ll be a lot easier to manage and I’ll feel a lot better about their safety and longevity. Yay for searchable PDFs and cloud hosting!

That said, I’ve got a bad habit of doing the washing on the Saturday and going for a run on Sunday. The sweaty running clothes start to stink up the room by the time laundry time comes around again.
Whilst I’m pretty good with organising my digital photos, I desperately need to sort through my browser bookmarks and some of my archived files.


Recent Short Term Wins [ 8 -> 7 ]
So, it’s near the end of the Xmas and New Years holidays, but I’ve been driven and motivated. So far I’ve wrapped all the presents, travelled up to Murray Bridge to visit family for Xmas, checked out the OTR Motorsport Park by Tailem Bend, and had an Xmas dinner with other family in Adelaide.
I’ve backed up my laptop and then wiped it and re-installed Windows 10. I’m still in the process of installing everything again.
I’ve also scanned hundreds of documents and receipts, gone to my storage unit to get a WHOLE lot more and managed to watch some Firefly (It’s still good, even if the graphics are a touch dated). I’ve also done a financial review, an 8km run, finally got a staple remover (can’t scan documents that are attached together) and I’ve organised my drawers, washed the bedding, etc..
Unfortunately I was 10s too late getting to MSY to buy the 8TB ext HDD I wanted. They closed exactly an hour earlier than the sign on their door and Google suggested, which was annoying. But I’ve got until April to buy one and fill it with my photos and any content I’ll want to have with me during my trip. I’ve got a 25TB NAS, with 9.5TB in photos and videos I’ve taken plus about 4.5TB in movies and TV shows I could watch. The hard thing is working out what to bring with me, especially with the nagging feeling I could lose everything on the NAS.


Unfortunately, whilst a lot of that helps me work and be more productive for the next year, I didn’t get to spend any time working on my own projects, something I wish I had more time for.
Still, this time last year my girlfriend and I were breaking up and my life changed from a downward financial and emotional spiral into a recovering one.


Taking short term to mean over the year there’s been some great wins.
After over 5yrs the Repower Pt Augusta campaign has won! A 150MW Concentrating Solar Thermal (CST) plant is set to be built in Pt Augusta.
I spent 6 weeks off work, most of it in Brisbane helping out with the epic 2 day global Zday event there. I helped organise it, film it and present at it and was rewarded with some amazing conversations with passionate geisters. I then got to do it all again, but on a smaller scale, at the Auckland Zday event in New Zealand.
Ohh and Elon Musk visited Adelaide twice, once to announce the big 100MW battery and again to announce updates to the plan for getting to Mars at the IAC – International Astronautical Congress. Although I didn’t get to see him either time.


Work wise, I worked on the www.southaustraliantrails.com website, especially on importing the large amounts of trail data.  Although the biggest project of the year was the Phase 2 release of the On The Run mobile app. I’ve been doing the backend programming and still am. It’s a living breathing project that is continuing to have new features and refinements and is rather back end heavy.


Long Term Vision [ 6 -> 8 ]
I feel like my core archetype is a problem solver. I want to fix things. I’ve got secondary characteristics based around being connected with people and whilst few would believe it, also elements of cleaning.
But, that core problem solver part of me learnt about 3 main concepts: the Kardashev scale, the Crash Course and the NL/RBE.


Kardashev Scale
Firstly is the Kardashev scale, a basic way of denoting the size of a civilisation. A Type 1 civilisation is planetary scale, a Type 2 is solar system sized and a Type 3 is galactic. It’s actually based on the power available to the civilisation; humans are around Type 0.72, Star Trek is a Type 2 and Star Wars is a Type 3.
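For the curious, the fractional ratings come from Carl Sagan’s interpolation formula, K = (log10(P) − 6) / 10, where P is power in watts. A quick sketch (the ~2×10^13 W figure for humanity is a rough approximation):

```python
import math

def kardashev(power_watts: float) -> float:
    """Sagan's interpolation of the Kardashev scale: K = (log10(P) - 6) / 10."""
    return (math.log10(power_watts) - 6) / 10

# Humanity's total power use is roughly 2e13 W (~20 TW)
print(round(kardashev(2e13), 2))   # ~0.73
# A full Type 1 civilisation harnesses on the order of 1e16 W
print(round(kardashev(1e16), 2))   # 1.0
```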


“A common speculation suggests that the transition from Type 0 to Type I might carry a strong risk of self-destruction since, in some scenarios, there would no longer be room for further expansion on the civilization’s home planet, as in a Malthusian catastrophe” — Kardashev Scale : Civilization implications



My core aim is to help humanity become a Type 2 civilisation. After that it’ll most likely be computer/AI systems, or species different enough from us to not be classified as human, who’ll become a Type 3 civ.


Crash Course
Many years ago I watched the original Crash Course video series by Chris Martenson. I would’ve seen it in 2008 or 2009, when online videos were still somewhat new. It pointed out quite clearly, and in detail, how humanity was facing three main crises: Energy, Economic and Environmental. It mostly came down to our dependence on oil, our dependence on Capitalism and its need for infinite growth, and the environmental effects both of those have.


Now, nearly 10 years later, I’ve been a part of the Repower Pt Augusta campaign which championed Solar Thermal as a replacement for an old coal power station in South Australia, plus I’m a board member of CORENA. Without abundant energy nearly everything is impossible. But renewables are an energy source that gets cheaper over time, unlike finite fuels.


Basically the Crash Course shows the major obstacles we need to overcome just to survive.


The Zeitgeist Movement
Watching Zeitgeist Addendum back in 2008 or 2009 was a pivotal point in my life because it was when I first heard about the Resource Based Economy (RBE) concept. Proposed by Jacque Fresco of the Venus Project, it’s a concept for an economic system with sustainability and human wellbeing at its core, based on systems design thinking, automation, and things like access over ownership.
The concept has been refined over the years, and the Zeitgeist Movement, an advocacy group, has added the Natural Law part, making it the NL/RBE. The idea is to begin living within the carrying capacity of the planet. But it’s so much more: different governance, different ways of using and allocating resources based on need, not money. It ends up being a post-scarcity / abundance-centred society.
Basically, by changing to an NL/RBE humanity and the planet can thrive and become a Type 2 civ.
Check out http://www.kublermdk.com/2017/04/10/zday-presentation-price-of-zero-transition-to-an-rbe/ for my Zday presentation about how we can transition to it.
Although the first two are mostly covered by The Zeitgeist Movement and NL/RBE, the three concepts together give a signpost of where we should be aiming, what we have to be aware of and how we can tackle the problems.
So, that’s been my long term vision for a long time. Help humanity become a Type 2, solar system wide civilisation, by helping transition to a new economic system.
It’s the details that are hard. I was already part of a failed attempt, Earth Communities, and I see a number of other attempts which may or may not succeed.
As per my Price of Zero presentation on the transition to an RBE, but also other people’s similar transition concepts developed independently, the concept of building a new model to make the existing one obsolete is the core. We start small with a community, grow it out to a village, a town and eventually to a city, then spread and create more cities and the new concepts, tools, technology and behaviours are then disseminated and integrated.


But, setting up those initial communities needs money, land, people, connectivity and tools. My focus is on trying to gather some money, connect people up and help build some digital tools.
I’m aiming for $30 Million as a collective pool, with up to 49% of that coming from me; the rest is likely to be crowd funded or at least collectively contributed. I suggest that amount because the interest alone can be $1 million a year. Now, that much is stuff all. $1 Mill/yr covers normal costs for about 14 people. The full $30 mill gives maybe 400 person-years, e.g. 1 year with 400 people or 10 years with 40 people. But it’s a good start. It’s a goal post to aim towards.
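As a rough sketch of where those figures come from (the interest rate and per-person cost here are assumptions purely to show the back-of-envelope maths, not financial advice):

```python
# Back-of-envelope check of the $30M community pool figures
pool = 30_000_000              # target collective pool, in dollars
interest_rate = 0.033          # assumed ~3.3% p.a. return
cost_per_person_year = 72_000  # assumed all-in cost per person per year

annual_interest = pool * interest_rate                       # ~$1M/yr
people_on_interest = annual_interest / cost_per_person_year  # ~14 people, indefinitely
person_years_total = pool / cost_per_person_year             # ~400+ person-years

print(f"Interest: ${annual_interest:,.0f}/yr")
print(f"Supports ~{people_on_interest:.0f} people on interest alone")
print(f"Or ~{person_years_total:.0f} person-years spending the pool down")
```

So the pool can either sustain a small group forever off interest, or a much larger group for a fixed run of years, which is the 400-people-for-1-year vs 40-people-for-10-years trade-off.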


The digital tools are a newer goal, one I only started thinking about this year when I became part of an awesome online chat group full of like minded individuals who’ve also been thinking about all this stuff for years. With others focusing more on the land buying side of things, and with discussions about multiple communities and some of the amazing digital tech, from Crypto to secure communications, resource allocation and the like, I realised there are some tools I’d like to make to help these things happen. A suite of apps that’ll help geisty communities.
Joining the Evolution Dojo online chat was one of the unexpected highlights of the year. Despite meeting with Peter Joseph the founder of the Zeitgeist Movement and many other TZM activists, helping deploy the new website and a number of other things, it was that chat room which I feel will have the best long term overall impact on me.


Future: Gather Together and The story of New Eden
So next year I’m claiming for myself. There are two projects I want to work on which are in line with my long term vision. The Story of New Eden is the sci-fi novel I’ve been working on in my precious spare time over the years. It started as a movie script, but someone else came up with almost the exact same premise and it’s since morphed into something quite different. It’s set 50yrs in the future and is the story of Eli, a 19yr old who thinks he’s living in the last city on Earth, the only one to survive a nuclear apocalypse, only to go on a quest and find that he’s actually in a quarantined zone and the rest of the world runs as an abundance-centred NL/RBE society. He has to adjust to it, finds someone to love, uses neuroMods to talk with animals, nearly dies a few times, travels to the Moon and returns home as part of a hero’s journey.
I’ve been researching for years, writing up sections, parts, ideas, chapters, bios and am starting to finally allocate some good names to the characters. I feel like it’s been gestating inside me, but it’s been kicking and growing for long enough. It’s time to push this thing out into the world.


The other project is Gather Together, an event management platform aimed at grass roots organisations. I’ve been a part of plenty of orgs, and properly organising even just the monthly events is usually too much work. Normally you create a Facebook Event or maybe send out an email. But to reach everyone you need to easily spread the message across multiple platforms: from Facebook and email to posting on the website, sending out SMS and email reminders. Then you’ve got to deal with last minute changes in the weather, venue or other such potential problems. Having a central system to coordinate all of this, schedule reminder messages, be alerted to potential issues (like more people having RSVP’d than the venue can hold) and be able to communicate quickly with people is just the first step.
Unfortunately I’ve only been able to spend about two weekends properly working on the platform. I only have a functioning API, able to send out SMSes but not able to schedule messages. The VueJS based UI I created was a flop and I’m still trying to learn ReactJS, MobX and GraphQL. Or I’ll just jump over to MeteorJS or AngularJS, but Meteor doesn’t scale as well as I’d like and Angular, whilst great for a main site, isn’t good for widgets, and these days there are some cool ReactJS capabilities for mobile, desktop and even Augmented Reality.


So hopefully the Story of New Eden novel inspires people towards a hopeful, abundance-centred future, whilst Gather Together lets people connect and might someday pay some bills.


Important but not urgent tasks [ 4 -> 7.5 ]
Most of the important but not yet urgent tasks are around prepping for the trip overseas.
From scanning a vast array of documents and sorting them through to making space in my storage unit for all the stuff I want to put in there whilst I’m gone to buying plane flights and accommodation.
There’s always the work on Gather Together, my novel and my other writing projects, including this post, which niggle at me. But I’m in a far better position to work on them now than this time last year, when my spare time was spent in Ark Survival Evolved taming or breeding dinos, helicopter harvesting (using two different accounts to collect resources faster), expanding my buildings or prepping for cave runs. I had a SHIT load of stuff in that game, but wasn’t getting far IRL. Now I’m able to actually get things done that should be done.


Learning and Education [ 8 -> 8 ]
I’m still learning at a prodigious rate thanks to audiobooks. I’m usually listening to 3-5 books a month, depending on their length, contents and how I’m doing. Listening when riding to work, washing the dishes, going for a run, driving the car. If my hands are busy but my mind is free, then I’m usually listening.
Although I’m now also allocating time to my thoughts. Empty spaces of time where I’m not listening to an audiobook, watching a movie or actively engaged in thinking about anything in particular. It helps me process things that have been on my mind and not feel like my head is full, but without just dumping the ideas out like meditation seems to do when it clears my mind.


Still, I need to focus a bit more on learning ReactJS and the like and maybe one day I’ll be able to implement a CRDT based collaborative editor similar to Google Docs.


Teaching, Training, Writing and sharing [ 4 -> 6 ]
I’ve been doing a lot more general writing. Or at least, I’m closer to what I consider the minimal amount I’d like to do. I still have a lot of thoughts that have partially turned into draft blog posts likely never to be completed, but I’m at least writing more, even if it’s nothing like what I would like to be doing. I’ve not even finished editing my own Zday presentation, let alone those from the No More Bad Investments Forum, and I’ve still got things to write up about the Education 2.0 manifesto from years ago.


Willpower challenges and strength [ 7 -> 7 ]
The Keto diet and usual exercise things have been my main willpower challenges, not that I was that good at sticking to under 50g of carbs a day, but I definitely felt a change. The rest is about trying to get things done and keeping happy.
The next step is to be able to put off some projects (like editing videos) until after I’ve done a bit more work on my own (like the Gather Together API) so I feel accomplished enough to continue working on the other projects whilst having made at least some progress on my own.
That said, I’ve got a lot of things to finish off before my trip overseas, it’s going to be hard to fit it all in!


Food/Nutrition [ 5 -> 7 ]
I recently tried out a low carb, high fat Keto diet, and whilst the full strict diet isn’t for me, it’s helped me kick my carb and especially sugar heavy diet, so that’s a good start.
Written 30th Dec 2017 by Michael Kubler.


2017 – The Year that was PDF version