SciFi concepts – nanoBlood, the positioning problem and remote sensing

Note: These are Michael Kubler’s personal notes regarding technology likely to exist in the future. It’s written as a reference guide for The Book of New Eden SciFi novel set in a post-scarcity (Abundance Centered) society. Eli is the main character in the novel.

nanoBlood is considered a form of bodyMod but is important enough to get its own category. Also known as nanobot blood, it's a generic term for a few different things.

All versions come with automated antidotes for spider, snake and other venoms. nanoBlood also helps power most of the other bodyMods, and a form of nanobots in the blood is how the neuroMod slowly gets assembled in the brain over the months.
v1 : Probes which provide vast amounts of info on your body. E.g. a 2-year warning of a heart attack, instant detection and treatment of cancer, radiation therapy (which is why Eli needs it), etc.
v2 : Red blood replacement which provides far better oxygenation ability and CO2 absorption, so you can hold your breath for nearly an hour or exercise much harder. Also has amazing toxin scrubbing and processing.
v3 : The immune system is augmented. Nanobots act as advanced white cells, T cells etc. This allows for not just a super immune system, but wirelessly transferred anti-viruses: very quick detection of new infections, diseases and viruses, and the ability to transmit an internal scan to be remotely processed and an anti-virus quickly developed and downloaded. Some neural implants can do basic AV processing, but it takes longer and takes up almost all of the processing power. Note: The nanobots are created in nanobot factories, usually embedded in the bone marrow and a couple of other points around the body; nanites (self-replicating nanobots) are NOT used due to their possible ability to go rogue.
v4 : Usually more than 30% of the blood is nanoBlood. It also has DNA scanning facilities which, with the help of a machine that's a bit like an MRI, allow all the cells in your body to have their DNA read.


Firstly, there’s the nanobot sensors. These augment normal human senses. There’s some which are for touch (pressure), temperature, velocity of travel, through to searching for chemical markers, DNA readers, and ones that search for viruses, bacteria and the like.

It started with people needing replacement blood and plasma due to health reasons and artificial versions being created to fill this need.
But now there's also nano replacement blood cells. In some cases, like red blood cells, they are capable of orders-of-magnitude improvements in oxygen carrying, although they don't always have the full range of other functions, such as certain types of waste removal. Handling of things like CO2 and lactic acid is also considerably amped.
There’s the immune system nanoblood defence where you can wirelessly download a new biological anti-virus set from the Internet and within minutes of someone on another planet getting a new form of cold virus, you can be actively able to fight it off.

There’s a few systems at play here.

You’ve got the honeypot traps. Cells designed to look enticing to potential bacteria and viruses, but are specially crafted so their exact makeup is known and they are heavily monitored. If their structure is altered then the foreign invader is very easy to identify and then analyse. Often cloud computing is used to convert the analysis into a new anti-viral signature. Some of this analysis includes specialised simulations ensuring that the identifying markers detected, if searched for and attacked, won’t also destroy human tissue.
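The honeypot idea can be sketched in miniature. Everything below (the fingerprint scheme, the byte-level "cell makeup", the sample data) is my own invented stand-in for the monitoring described above: a decoy whose exact makeup is known, so any change both flags an invader and pinpoints candidate markers for analysis.

```python
import hashlib

# Hypothetical sketch: each honeypot cell's known makeup is reduced to a
# fingerprint; any change flags a foreign invader, and the altered regions
# become candidate markers for a new anti-viral signature.

def fingerprint(cell_state: bytes) -> str:
    return hashlib.sha256(cell_state).hexdigest()

def check_honeypot(baseline: bytes, observed: bytes):
    """Return None if untouched, else the byte offsets that changed."""
    if fingerprint(observed) == fingerprint(baseline):
        return None
    return [i for i, (a, b) in enumerate(zip(baseline, observed)) if a != b]

baseline = b"DECOY-CELL-MEMBRANE-PROTEIN-MAP"
infected = b"DECOY-CELL-MEMBRANE-PROTEXN-MAP"
print(check_honeypot(baseline, baseline))  # None: honeypot untouched
print(check_honeypot(baseline, infected))  # [25]: altered site -> analyse it
```

In the fiction, the "altered sites" would then go off to cloud simulation to ensure the resulting signature doesn't also match healthy tissue.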
It’s a major concern that you’ll download an anti-virus update that’s (intentionally or not) malformed and causes the nanobot immune system to start attacking and liquefying your own body. A form of nano-ebola, except not contagious. When a new AV is applied only small amounts are released into a test environment to ensure the anti-virus isn’t dangerous to you.

It should be noted: there's no use of nanites. Self-replicating nanobots are heavily controlled and very carefully researched, but aren't allowed for general use due to their potential for exponential destruction.
Instead there’s nanobot factories, usually installed in your bones which create the nanoblood. You can of course get injected with booster shots as well, as happens to Eli when trying to deal with the radiation exposure.

Other concerns include getting through the blood brain barrier, especially important during the development of a neuroMod.
This can be done through a border-controlled tunnel: a specially reinforced nanobot tunnel with a basic verification check at at least one end, allowing valid nanobots through.
Alternatively, some nanobots are big and powerful enough to push their way through the barrier as they please, with it usually closing up behind them.

Check out the previous post about neuroMod levels for more info.

One of the main functions of nanoblood is for energy delivery to the neuroMod and various bodyMods. Want a light in your finger? You need to give it power. Want to send wireless communications to a drone or satellite? You need power for that too.
NanoBlood provides the needed energy. Sometimes just by transferring ATP around, but other times using more advanced and more nanobot optimised energy solutions.
Of course, there’s only so much energy the human body can generate and store in reserve. Most people have plenty of reserves of energy for burst tasks. But there’s of course options for consumption or injection of purified energy solutions. Or you can get a witricity (wireless electricity) recharging system bodyMod.

Dealing with toxins and unwanted chemicals. Improved clotting and dramatically improved healing. These are other things you can do with the right nanoBlood setup.

NanoBlood Sensors of interest:
CO2 – A nanoBot sensor network can be more accurate than the existing human body. Letting you see CO2 levels throughout your body.
O2 – Humans are susceptible to oxygen deprivation because we mainly sense CO2 levels in the blood, not O2; a dedicated O2 sensor network fixes that blind spot.
CO or Carbon Monoxide – Because the main problem is CO binding to haemoglobin, this isn't an issue for artificial red blood cells.
Lactic Acid – Created as a byproduct of using up energy, especially during exercise.
ATP – The energy currency of the body.
DNA – Being able to read the DNA of your cells. This is a very tricky endeavour and produces vast amounts of data. A few GB per cell and trillions of cells. This is usually only done occasionally for active geneMod users or in special cases, like radiation poisoning or a geneMod gone wrong.
Pressure – This is often used as a way of helping identify the nearby other nano sensors. Sometimes used to augment the normal sensation of touch, but usually in places like the brain that we don’t normally feel it.
Temperature – This is often used as a way of helping identify the nearby other nano sensors. But of course knowing your actual body’s temperature compared to the perceived temperature can help a lot.
Hormones – Detecting the levels of your various hormones.
NeuroTransmitters – Especially useful in the brain of course. A particularly important issue is knowing when there’s an excess of used up neuroTransmitter chemicals and the brain needs to flush them out, aka, go to sleep. A process that can be vastly sped up.

There’s a whole slew of chemicals to track, both good and bad, plus specialised nanobots, like those searching for cancer cells. Also ones designed specifically for your skin.
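The DNA entry above implies staggering data volumes, and a quick back-of-envelope shows why full-body reads are only done occasionally. The figures below are rough real-world estimates (about 3 GB of raw sequence per diploid cell, about 30 trillion cells in a human), not numbers from the notes themselves:

```python
# Back-of-envelope for why full-body DNA reads are done only occasionally.
# Assumed figures: ~3 GB of raw sequence per cell, ~30 trillion cells.
GB = 10**9
per_cell = 3 * GB          # bytes of raw sequence per cell
cells = 30 * 10**12        # rough human cell count
total = per_cell * cells
print(f"{total:.2e} bytes")  # ~9e22 bytes, i.e. ~90 zettabytes per full read
```

Even with heavy compression and differential encoding, that's why this is reserved for active geneMod users or special cases like Eli's radiation poisoning.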

Nanoblood sensor network – The positioning issue

So most nanoBlood sensors are very basic. They have the sensor package, be it chemical, pressure, touch or something more specialised, plus a transmitter. Using millimetre-range radio signals avoids filling up the bloodstream with physical chemical messengers, which is how the body generally signals. It also allows much faster signalling.
The sensors have very little power output and might only sample once a second or so, but they only need the signal to travel half a centimetre at most to the nearest relay station.
NanoBot relay stations are distributed around the body. You might have half a million sensors in your index finger and a few thousand relay bots there as well. They are bigger and don’t have sensors, just transmitters and receivers. They can also specifically relay messages to the sensors. The relays usually just forward data to their nearest neighbours until the data gets to a nearby accumulator. The accumulators are approximately the size of a pea and exist in your major bones. These are usually directly wired to each other and up to the main processors. The accumulators might receive a few billion points of data a minute and can store more than 4 hours of it.
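The accumulator figures above ("a few billion points of data a minute", "more than 4 hours") can be sanity-checked with a quick sizing sketch. The per-reading byte count and exact throughput below are invented for the exercise:

```python
# Rough sizing of an accumulator buffer. Assumed: 2e9 readings per minute
# arriving at one accumulator, 8 bytes per stored reading (both invented).
readings_per_min = 2 * 10**9
bytes_per_reading = 8
hours = 4
buffer_bytes = readings_per_min * 60 * hours * bytes_per_reading
print(f"{buffer_bytes / 10**12:.1f} TB")  # ~3.8 TB for a 4-hour window
```

A few terabytes in a pea-sized bone implant is comfortably science fiction today, but it shows the claimed numbers are at least internally consistent.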

The main processors are about the size of your finger and usually installed in your collar bone. The left and right processors work together in a redundant way, allowing for one to fail and the data to still be available. These main processors are where the powerhouse of work happens, from nanoBot signal processing to neuroMod task offloading.
Whilst they have basic 10m-range wireless transceivers, they also liaise heavily with the shoulder-mounted long-range maser system (microwave laser). The directed maser transmissions are how you can communicate with a drone or satellite flying many km away. The drones give off regular pulsed beacons indicating their location, and your body fires directed radio waves at that location, allowing far lower powered transmissions than would normally be needed over such ranges. It also increases privacy, as it's harder to snoop.

The main issue is that the nanobot sensors are dumb. They don't know where they are. Their signals are usually very basic: the sensor reading (e.g. current temperature, or a chemical density reading), the sensor type, a unique ID and a counter. The counter automatically increments on each transmission.

Alone, this is hard to use. But the relays can specifically request readings from nearby neighbours and can map which other sensors are nearby, so you know physical proximity to the other sensors. The relays have a timing counter with better-than-millisecond precision, but might not know the exact date and time, just an incrementing timer. This goes with the data packets. The accumulators do have actual dateTime information. So between all of that, you can know when a reading was taken.
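Putting those pieces together, here's a minimal sketch of the packet format (reading, type, unique ID, counter) and of how an accumulator could convert a relay's relative timer into a real timestamp. All field names and the tick scheme are my own assumptions:

```python
from dataclasses import dataclass

@dataclass
class SensorPacket:
    sensor_id: int      # unique ID
    sensor_type: str    # e.g. "temperature", "CO2"
    reading: float      # the raw sensor value
    counter: int        # increments on every transmission

@dataclass
class RelayedPacket:
    packet: SensorPacket
    relay_ticks: int    # relay's incrementing timer: ~ms precision, no epoch

def absolute_time(relayed: RelayedPacket,
                  accumulator_now_ms: int,
                  relay_ticks_now: int) -> int:
    """The accumulator knows real dateTime; convert relay ticks to wall-clock ms."""
    age = relay_ticks_now - relayed.relay_ticks  # how long ago it was stamped
    return accumulator_now_ms - age

pkt = RelayedPacket(SensorPacket(42, "temperature", 36.9, 1001), relay_ticks=5000)
print(absolute_time(pkt, accumulator_now_ms=1_700_000_000_000, relay_ticks_now=5250))
# 250 ms before the accumulator's current clock reading
```

The real system would also have to handle relay timer rollover and clock drift, but this is the basic bookkeeping the notes describe.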

Now you have a massive swath of unique IDs and sensor readings, and you need to create a basic map.

You need to work out where those sensors are and there’s a variety of techniques including:

Ping and traceroute tests. Similar to Internet servers, by asking the sensors, relays and accumulators to reply as soon as possible, you can get a gauge of distance based on how long it took to reply. You can also do traceroutes, and work out how far away a sensor is based on the number of hops. Did the signal go through 30 or 5,000 relays, and which ones?
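A toy version of that estimate. The relay spacing and per-hop delay are invented constants (the notes only say signals travel half a centimetre at most to the nearest relay), so treat this as a sketch of the idea, not the system:

```python
# Sketch of the hop-count/RTT idea. Constants are illustrative assumptions:
HOP_SPACING_CM = 0.5        # assumed mean relay-to-relay distance
PER_HOP_DELAY_US = 20.0     # assumed forwarding latency per relay

def distance_from_hops(hops: int) -> float:
    """Rough straight-line-ish distance implied by a traceroute hop count."""
    return hops * HOP_SPACING_CM

def hops_from_rtt(rtt_us: float) -> int:
    """Infer hop count from a ping: the round trip crosses each relay twice."""
    return round(rtt_us / (2 * PER_HOP_DELAY_US))

print(distance_from_hops(30))    # 15.0 cm: plausibly somewhere in the forearm
print(distance_from_hops(5000))  # 2500.0 cm: the path is clearly indirect
print(hops_from_rtt(1200.0))     # ~30 hops inferred from a 1.2 ms reply
```

Comparing the hop-count estimate against the RTT estimate is a cheap consistency check: if they disagree wildly, the route is congested or looping.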

Another method is to make known changes and look for which sensors reflect those changes. Warm up your hand, touch your face, sit on the ground, lie on the bed. Jump up and down. Drink water. All of these will light up different sensors. A big issue is that if you don't have a high enough level neuroMod, the visual and other neocortical information which gives great resolution to many of the senses isn't available.
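The known-change technique boils down to simple differencing: snapshot readings, apply the stimulus, keep the sensors that moved. A sketch with invented readings and an invented threshold:

```python
# Sketch of stimulus mapping: make a known change (warm up the hand), then
# keep the sensor IDs whose readings moved with it. Data/threshold invented.
def sensors_lit_up(before: dict, after: dict, threshold: float = 0.5):
    """Return IDs whose reading changed by more than `threshold`."""
    return sorted(sid for sid in before
                  if abs(after[sid] - before[sid]) > threshold)

before = {101: 31.0, 102: 30.8, 103: 36.9}   # temperature readings
after  = {101: 34.2, 102: 33.9, 103: 37.0}   # after warming up the hand
print(sensors_lit_up(before, after))  # [101, 102] are probably in the hand
```

Repeat with enough distinct stimuli (face, ground, bed, jumping, drinking) and each sensor ends up with a signature of which stimuli it responds to, which is the map.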

A 3rd option is external scanning.

The basic version of this is simply a video camera. This is used instead of your eyes for people who don’t have a high enough level integrated neuroMod.
But there's actual energising scanners, which are often integrated into hospital beds and into MRI-looking machines. They work by externally triggering the relays or sensors in specific locations and mapping the responses. If it's just a basic 2D scanner, like laying on the bed, then visual information and things like rolling over and normal human motion can help increase the resolution of the mapping. Having a scanner on the sides, giving two axes of transmission beams, gives even greater fidelity, especially for your insides, like sensors in your liver, whose positions are harder to pin down.

The encapsulating tube readers can also enable a full body DNA scan, if the right type of nanoBots are available. These are usually energised remotely and need to enter the cells. They can't live between cells as many other sensors can.

Sometimes you just need a basic arm that goes part way over the body, like an arm chair rest. Or simply having a specialised doorframe is enough.

Obviously some nanosensors will be stuck between or inside cells and are in somewhat static locations, like by a bone, muscle or tendon. Others are in the bloodstream and are moving targets. But the moving ones pass near known static ones, so they can be tracked.
There's other issues: the sensors break down and are replaced regularly by the nanobot factories. At a slow enough rate this isn't a problem; you have a bunch of sensors in a known area, there's some changes over time, but the positions are generally known. However, the high metabolism mode that Eli enters causes a massive increase in nanobot turnover, and he needs regular injections of new nanoBot sets, causing some of the mapping to become inaccurate. Hence he has a basic external scanner built into his bed, which works in with the video cameras in the room.


Remote Sensing / out of body experiences

Because the nanosensors can work wirelessly out of the body, you could feel information from the drops of blood nearby.

But it goes beyond that. The mapping of sensors and sensations can be done for objects out of your body. Say a door.
The door to my room feels the air conditioning on one side and the hot Vietnam heat on the other. It feels the wind, the cat walking past or occasionally trying to scratch it. The hinges know when they are dry and need oiling again. The handle knows when it’s being used and because of the force variations likely by who. The frame and door know when they are closed or open, but also if the house has shifted and the door doesn’t quite close properly anymore. But all this sensory information could be provided to you, once you’ve got a level 7 neuroMod. You could feel the door as if it is an extension of you.
You could then feel the sensations of a tree outside. Here the sensor dust network comes into play: GPS microdots can be used to calibrate the position of nearby smart dust, and video cameras that track occasionally IR-flashing dust sensors can provide high-fidelity positioning.
So you could feel the warmth of the sun on the tree. The wind in the leaves. But also the ants crawling on its bark. The moisture in the air, and the sap leaking from the wound where a bear swiped it during a fight. The tree obviously won :p haha

You could have sensors inside the tree which tell you about the root system and the soil nutrients it's taking up. The moisture being raised through the trunk. The CO2 it's pulling out of the atmosphere and using as a building block for creating more plant matter. How open the stomata are and how well it's breathing.

That’s a single door or a single tree.
But you could also abstract up and ‘feel’ a whole house or even a whole forest. You would feel different information. The forest would include not just the trees but the deer and birds and ants and bugs and decomposing nature. The ecosystem.
The house would include bedrooms and toilets and electricity, Internet, power and water. With a whole neighbourhood being equivalent to the forest.

But how large can you go? Can you abstract to a whole country? To all the oceans? To a whole planet? To a collection of planets?

This is different to experiencing the life stream of a friend, be they human or animal. Those are based on the brain's sensory perceptions and are neuroMod-enabled streams which include conscious processing of the stimuli.

You could also have life streams of AIs, based on their own processing and experiences (conscious or otherwise). But those aren't the same as remote sensing, which is about creating a new sensory system and a new interpretation system. You'd need algorithms and AI to help with the mapping, the data processing and making sense of the data. Turning it into sensations we can relate to, or helping us develop new sensations we could never have imagined.

Mods and Apps – Science Fiction Concepts for SciFi writers

Set 30-50yrs in the future.

Mods and Apps

* bioMods are for specific biological enhancements. These are usually a little bit more advanced than cosmetic surgery in 2017. The standard is the spinalTap mod, an enhanced spine and skeletal upgrade (usually including knees) that dramatically reduces skeletal issues. No more easily popped discs or dislocated knees or shoulders. Basically humans haven’t finished evolving to deal with walking upright and this helps complete that. It usually has a gMod (genetic engineering) component.

* geneMods are genetic engineering changes. Usually an injected retrovirus that rewrites your DNA. Think of it like CRISPR but working on all the cells of your body. An example is the ability to change your skin pigmentation between normal shades over the course of a few days, e.g. from 0 (albino white) to 5 (very black). Some changes are easy, like re-enabling a number of regrowth options already in our DNA, so you can cut your arm off and over the course of a few months it'll grow back. Want your skin pigmented to look like a purple dragon 🐉? That's beyond normal gMods. But then, most people would just have an eInk tattoo for that instead of changing those specific skin cells' pigmentation.

* bodyMods are mainly physical implants of technology. Things from an LED light in your fingertip to having geiger counters or EM detectors built into your body as new senses. Want to read someone's DNA by shaking their hand, or remove your stomach and just have print-cartridge-like nutrient containers that you replace? Sure. You can have the standard dental armour upgrade so you only have to brush your teeth once a year, or eInk skin that turns your body into basically a digital screen. You could become a chest breather, replacing your lungs with two holes just above your collarbone: air goes in one hole and out the other, so you no longer have a normal breathing motion. There's also the usual assortment of faster legs and arms. Maybe your full cybernetic arm could have a nanofactory in it and you could leave a trail of smart dust.

* nanoBlood – Nanobot blood.

* neuroMod – The neural implant. See also the 10 levels of neuroMod integration. Most of the apps are aimed at level 6 or 7 integration.

* bodyApps (as opposed to bodyMods) are apps which use the neuroMod to alter how you move, think or interact with implants and body/bioMods. The vast majority need a neuroMod and need special permissions. Kinda like when you authorise an app in Google Play, but with a lot more info about what it will actually do and not do. Especially when an app is initiating activities for the first time, like LieToMe changing the way your eyes scan someone's face, or the complex, stochastic movements of SpeakEazy changing how your facial muscles work to make it harder for people to detect your lies. bodyApps are more about movement and control. The Posture Pedic bodyApp is installed by default (for those with the Trev special set of mods) and goes well with the spinalTap bioMod. It makes you sit up straight, keep your head back (not in the forward head posture), etc. It works to reduce most muscle fatigue and skeletal tension. There's versions for running better, meditation, and various martial arts. This is different to the "I know Kung-fu" part of The Matrix; it's a neuroMod running to alter your motion control. It changes how you'd try to move.

* visionApps – As with other apps, this uses the neuroMod. You can get Public, Private and shared (group) augmented overlays. But advanced vision mods can tap into how your visual processing neurons work. You've got to be careful when you start changing how you perceive straight lines or other core things; it's very easy to go into dangerous trips with reality distortions few drugs can even get close to. Whilst these days they are automatically detected and the changes reset (similar to going into a 'preview' mode of new monitor settings and it auto-reverting), it's still possible to get yourself stuck in a mode where you simply can't navigate to cancel it, which can cause long-term psychosis. Note: You can get eyeMods, which are specific eye replacements. This is what the Death Squad have. Their eyes glow red because they create infrared light to help them see at night.

Some Cool Apps

NeuroTelepathy, usually just known as Telepathy, is the app which lets you talk with other neuroModded humans, but also with AIs and modded animals.

There's various levels of communication: from the equivalent of IRC chat or normal speech, through very carefully controlled, pre-recorded and edited thoughts (with concepts, visuals, feelings and the like), to a full stream of consciousness as you feel or think it, with only basic filtering, e.g. removing most background body sensory info and anything sexual or socially inappropriate.

You can also send concepthesis concepts and more.
The Mind Meld app is the NeuroTelepathy app with a 2-way (or sometimes more) merge and no filters. Obviously named after the Vulcan mind meld.

Penfield – Emotional control
Psychology and personal mental control beyond any advanced meditator. Like the Penfield mood organ in Do Androids Dream of Electric Sheep?
Gives you the ability to control your emotional state and even your thoughts with great precision. If you want.
Most people who use Dataism as their new religion give this control over to AI algorithms, which can optimise their life to be the most rewarding and fulfilling, with lots of time in the state of flow. Thus providing great satisfaction beyond just being stuck on the 'Happiness' setting.

BabelFish – Universal translator
A core app.
It can work on both audio and visual inputs, acting like a far better version of Google Translate's camera mode, or the conversation equivalent. Often the main way you know it's even working is because the audio/video signal has a 'translated' tag added, and you can toggle the translation layer and see the underlying signal before translation.

It also lets you speak or write nearly any language, if your vocal cords or hand is capable of the output. Often computer to computer digital speech and text isn’t really replicable by humans without speakerbox bodyMods (speakers instead of voice boxes) or text printing capabilities… Or usually just a digital display like eInk skin.

Lie To Me – Lie Detector
This works through all the usual input systems with some degree of accuracy against those not actively blocking it.
It hooks into the brain's existing system 1 (fast) triggers, but is also able to analyse with a lot more skill and accuracy than most people's innate lie detection. It usually helps focus the fovea on the person of interest's likely telltale signs, mostly on their face, from their eye movements to minor muscle twitches. Because it'll urge the user to look at certain points (they'll want to do so), it can often be detected by others, so stealthier modes are available; or people just use an alternative visual input, like a nearby drone.
It uses the latest in neural net and other AI analysis to instantly make you a better lie detector than any unmodded person on the planet.

SpeakEazy – Lie to people / Pokerface
The SpeakEazy counter app is usually also installed. It started by detecting the other person's obvious signs of using the LieToMe mod.
It then tried to intercept your tells and stop those micro-expressions. This dampening of your tells works well against muggles (unmodded people), but the weird periods of dead movement then became a tell themselves.

Because such dampening is detectable, the normal mode when talking to someone else who is also modded (with a Wizard Hat) does the equivalent of creating white noise, but with your face muscles, eyes and nearly everything else. You'll have a sea of random micro-twitches and an erratic heart rate, and seem jittery to the system, in a way that masks your emotions and signals: it looks like you are changing emotions and sending lie and truth signals in such rapid succession that they become meaningless.
Think of the big face of The Matrix core that Neo talks to at the end of the 3rd Matrix film, or other such particle, water or electrical based characters. There’s general form there but it’s annoying to view for a long time. Negotiators refuse to talk to people with this mode on, but those doing the last remaining bits of capitalist politics (in the non-RBE cities that aren’t New Eden) have this mode on by default.

People using the noise version often refer to it as PolkaFace, a play on poker face.

Look to the Stars – Where Am I ( Night Sky )
Look into the night sky and, based on the stars, know your position in the solar system (not just lat/long on Earth or Mars, but anywhere out to the Oort cloud, and within a +-300yr time range). The full version attempts to work out your position in the Galaxy over a +-5,000yr range, although it needs a larger download, and processing is much faster if you have a smart watch or even a spaceship to help.

See that Key
Look at a key or any object and once you’ve seen it from enough sides you can have a 3D model generated and be able to 3D print it.
Works great when you’ve got a large drone nearby that can 3D print it for you, or are simply near a city which has fast 3D printers.

Morse code reader
Once installed this runs in the background looking for Morse code signals, especially audio beeps and light flashes, and will detect and convert them. This allows Q codes and other Morse shorthand to be converted into their general meaning.
It will also detect binary, and other basic protocols by default.
This is often hooked up to people's flashlight fingers, and it can be fun to see two kids running around talking via their flashing fingers. There's also more advanced transmission protocols for tapping and vibrations, so someone tapping on a table, or tapping on their friend's hand whilst holding hands, or on a leg when cuddling, can be a form of talking behind other people's backs.
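The core detection step, classifying on/off durations into dots, dashes and gaps, can be sketched like this. The timing thresholds follow the standard Morse ratio (dash = 3 dot units), but the tiny lookup table and sample data are just for the demo:

```python
# Toy sketch of the core detection step: classify on/off durations as
# dots, dashes and letter gaps, then decode via a lookup table.
MORSE = {".-": "A", "-.": "N", "...": "S", "---": "O"}  # demo subset only

def decode(durations, unit=100):
    """durations: (is_on, ms) pairs from a flashing light or beeping tone."""
    letters, symbol = [], ""
    for is_on, ms in durations:
        if is_on:
            symbol += "." if ms < 2 * unit else "-"   # short=dot, long=dash
        elif ms >= 2 * unit:                          # long gap ends a letter
            letters.append(MORSE.get(symbol, "?"))
            symbol = ""
    if symbol:
        letters.append(MORSE.get(symbol, "?"))
    return "".join(letters)

# S O S: three shorts, three longs, three shorts
sos = [(1,100),(0,100),(1,100),(0,100),(1,100),(0,300),
       (1,300),(0,100),(1,300),(0,100),(1,300),(0,300),
       (1,100),(0,100),(1,100),(0,100),(1,100)]
print(decode(sos))  # SOS
```

The real app would first estimate the sender's dot `unit` (their 'hand') from the signal itself rather than taking it as a parameter, which is what the adaptive timing analysis described below refers to.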
As with many of these things, there's an ever-evolving war of encryption and decryption, with kids using a greater variety of ciphers, although the generic decryption tools and AI decryption make basic changes to the ciphers easier and easier to break. (That said, the normal child/teacher power dynamic is a lot more like that of a respected mentor, so it's rather different to normal school setups.)

The voiceBox bioMod, allowing for a built-in speaker, isn't nearly as popular as light fingers, but is often used to transmit at high frequencies most humans can't speak at, and usually can't hear without hearing bioMods, so it suits cohesive groups that are modded. But nearly all people with such bioMods also have the neuroMod implant, so they just communicate via standard encrypted wireless telepathy.

The development of the Morse code app is often used as a standard example of app development of its kind. The first developer thought it would be cool to read Morse code like they do in all the movies, without actually having to learn Morse code, so they worked out how to hook into the various pattern recognition systems of the brain and neuroMod. It's usually a combination of system 0 (basic visual neural detection of lines, shapes and time-repeating patterns) plus the first-order, system 1 (fast) processes, which usually detect the flashes. By buffering a few seconds of input on a background thread that analyses the signals, it can try to find anything that looks like Morse code. Once detected, it can analyse the input with greater focus (be it visual, audio, kinaesthetic or even smell), do more specific optimisations around the particular signal's 'hand' (timing patterns, be they milliseconds or minutes in duration), and do more noise reduction, context analysis (does QSR mean a general Morse code shorthand, or just the 3 letters?), etc.
Special patterns, like SOS are extra highlighted.

It started as more of a gimmick than anything, but others took the code and made it easily extensible, so new filters and new algorithms could be added, along with more hooks into different areas, like concepthesis messaging (concepts and learnings) made available as sound or touch for those who don't have normal wireless internet-to-the-brain enabled; this is usually provided at museums or art galleries as a form of disabled/fallback support.
It can even detect messaging shown via augmented reality, or as simulated external sensory input (sound, touch, vibration, smell, taste) during immersive VR sessions, etc.

Then there's a whole host of different detection algorithms, and hooks into apps like the more powerful generic decryption systems. These can offload processing amongst a neuroMesh (a group of other people with neuroMods), allow for joint detection of inputs (e.g. thousands of people worldwide getting small pieces of the puzzle) and, of course, the ability to scan over very long time periods, like years not seconds. Most of those are rarely used addons, mostly done for fun, although some cool detection of earthquakes was possible on a neuroMesh of people who were meditating. On the direct vibration sense: some people will augment their heartbeats to act as Morse code, detectable by their partner just by holding their hands when fairly quiet. Heart rate detection of others is fairly easy at a greater distance with some of the infrared eye bioMods, or with tuned electrical EM field detectors, be they internal to the person (thus requiring a lot of work to cancel out the detection of their own nervous system's EM activity) or external, like built into the walls and ceilings, usually trying to focus on bioelectric signals rather than normal wireless transmissions.

4 Versions of the Olympics
#1 – The existing normal Olympics. Healthy and unaltered. No drugs, nothing. #1D is the disabled Olympics, although that's a LOT rarer given most people opt for replacement grown limbs, or even end up in the #3 tech-enhanced version with bionic limbs that are better than what they used to have. Actually, it was because of dealing with disabilities that we developed such good bionics.
#2 – Drug enhanced – Humans using drugs and other general enhancements, but nothing we’d consider active or passive technology.
#3 – Tech enhanced / Cybernetic – People with nanoBlood, implants, bionic limbs, and many of those beyond the Kubler Cascade.
#4 – An android only version – Mostly for robots only, although there are occasionally matches between humans and androids, e.g. RoboCup Challenge style soccer with humans against robots. Usually only fully cybernetically enhanced people have any hope against even reasonably well optimised robots. Often the androids will zoom around on their equivalent of roller skates instead of pumping their legs, or will have very different forms of locomotion, and there are some surprising ways that the genetic algorithms for, say, long distance javelin or high jump can create amazing robots. Few people even consider it the same sport, but then, the androids never really saw much point in sport. The more interesting robots are the ones that attempt to beat the best enhanced human in each area without swapping out parts. So, being better than the best human at not just throwing and jumping but shooting, running, playing sports and swimming. To be good at all of them means some very interesting trade-offs and engineering feats.

Kubler Cascade

The Kubler Cascade is when there is some driving force in a person’s life which makes them want to be more augmented, e.g. wanting to be the best athlete or fastest tech head they can be, and in doing so they quickly jump from the generally acceptable 40% range to the 80-90% level of augmentation. (I’m assuming that) it’s harder to go beyond that level due to technological difficulties, and it’s usually easier to jump from there to being digitised, with your consciousness running completely inside a computer. However, it usually takes a different force or pressure to make someone want to be digitised.

The other point to highlight is that most people are fine with up to 40% augmentation. This usually involves neural implants, basic genetic corrections, and enough nanobots to ensure they are healthy and have an abundance of sensors to know if there’s anything wrong with them.
But apply a competitive force and you start replacing your legs with fully mechanical ones which allow you to run at over 100km/h. To do that for anything more than a quick burst you need to feed them enough energy, so you need to up your nanoBlood levels, replace your heart, become a chest breather and replace your stomach.

Obviously, augmented people in the tech enhanced version of the Olympics are the most susceptible to this.

Often the Kubler Cascade is also defined as the threshold before which you are still considered a normal Homo sapiens, but after which you are classified as something else. Some people use the popularised term Homo deus, or god-like human. Others use De novo sapiens, or just novo-sapiens, meaning “new humans”. Or the more pretentious French version, Nouveau-Sapiens.

10 Levels of neuroMod Integration
There are 7 main levels of neuroMod integration, plus a few advanced levels beyond that.
Once you go past around level 8.4 of integration there are issues with being able to reverse the process and remove the neuroMod; too much of your brain has been replaced. Hence most people sit between level 7 and level 8.2 integration.
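The reversibility threshold and typical range can be expressed as a trivial rule. A minimal sketch using the in-universe numbers; the function and constant names are my own invention:

```python
# Sketch of the neuroMod integration thresholds described above.
REVERSIBILITY_LIMIT = 8.4   # past this, too much brain has been replaced
TYPICAL_RANGE = (7.0, 8.2)  # where most people choose to sit

def is_reversible(level: float) -> bool:
    """Can the neuroMod still be removed at this integration level?"""
    return level < REVERSIBILITY_LIMIT

def is_typical(level: float) -> bool:
    """Is this level in the range most people settle at?"""
    low, high = TYPICAL_RANGE
    return low <= level <= high
```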

Level 0 – Nothing. You are an unmodded. A muggle.
Level 1 – The seed has been injected and there’s a nanofactory running but it hasn’t connected with anything.
Symptoms: Some light tingling at the site of the injection, and maybe a tiny bit of discomfort where the neuroMod factory implant has an injection site poking through the blood-brain barrier.

Level 2 – The neuroMod is just beginning to integrate with some nearby neurons. It’s ensuring compatibility, ensuring the body doesn’t reject it.
Symptoms: During this time there might be small, barely noticeable glitches, things like unexpected memories. It can sometimes also trigger an out of body experience.

Level 3 – The neuroMod now has communication with the Internet via the wireless chips in your collarbone. It’s also creating main pathways to the important parts of your brain: the motor cortex, amygdala, frontal cortex, and along the path between your optic nerves and visual cortex, generating more of the main highway infrastructure. The main tree branches.
Symptoms: The motor cortex and sensory cortex can get out of sync. You could have issues where the sensation is there but you can’t move, or vice versa. This can cause a weird lack of proprioception, the feedback loop of going to move something and then feeling the texture, weight and other senses that let you know about that object.
The weirdest is when it mutes the channel carrying the reliability / probability of sensory input: everything feels uncertain. You don’t know the certainty of what you are perceiving, and it can get very weird.

Level 4 – Initial visual integration. You’ve got visual overlays, as the neuroMod is now intercepting the optic nerve.
Symptoms: At first you get vision that seems empty (no signal) and the brain fills it up with imagined creatures or shapes (like the visual aura I get before having a migraine, or Oliver Sacks’ talk about Charles Bonnet syndrome), or vision that is distorted and weird. By the end, though, you have Augmented Reality and basic close-your-eyes Virtual Reality. You interact with the overlay using basic eye tracking gestures (seeing as that’s controlled by the lower level lizard brain, not your motor cortex… which is cool).
Also, the midbrain controls your voluntary eye movements. What happens when the neuroMod forces your eyes to view something? I’m guessing there has to be a thought-like request or recommendation before it takes over an eye movement, as without one it feels weird, but it’d be simple and subtle. It could be a weird sensation to make use of, especially during AR gaming.

Level 5 – Full motor cortex and emotional integration. It can now change the way you move and how you feel.
Initial sensory-based Life Stream recording happens here, as all the main sensory input is being recorded and the global workspace (conscious thought) has been decoded to a sufficient extent.
Symptoms: Sometimes you’ll have weird twitches as it triggers some muscle groups, and sudden emotional outbursts or numbness. You’ll also likely get some brief out of body experiences.

Level 6 – Thought manipulation. No longer just reading your thoughts, it’s able to change and manipulate them, and able to start mapping your memories.
Symptoms: <Insert>

Level 7 – FULL Integration. You can have Full Virtual Reality (FVR), Matrix style. An immersion so strong that without certain restraints and your memories you might not know you are in a simulation.
By now you can start to think about 3x faster than normal and sleep only 2.5hrs a night. You’re a wizard, Harry!
Symptoms: <Insert>

Advanced neuroMod Levels
Level 8 – Technomancy. This is for the speed freaks and involves using the neuroMod along with a whole bunch of enhancements to increase the speed of your thought by another 5x, so 15x faster than normal (23ms instead of 350ms response times). Certain very special capabilities, like Bullet Time, are done as ASIC-like dedicated hardware. To get something decent out of 350ms of bullet flight time you’d want 5ms time slices, giving 70 frames of action and reaction, so it’d feel like about 3 seconds.
This is the level that the technotopians are required to be at, as are the negotiators, since they have the Bullet Time mod.
Symptoms: You’re able to process things so fast that conversations with people of a lower speed level aren’t possible in realtime thought. You have to buffer the input and output. You also find that Internet latencies become noticeable, so huddling near data centers or physically being near the groups of people you are telepathically talking with becomes important. Also, the amount you sleep will be drastically reduced, to about 30mins a day, thanks to advanced neurochemical cleansing.
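The Level 8 timing claims can be checked with quick arithmetic. A sketch, where the assumption that each 5ms slice is experienced as roughly one normal ~43ms perceptual “moment” is mine:

```python
# Checking the Level 8 numbers with plain arithmetic.
NORMAL_RESPONSE_MS = 350   # baseline human response time
SPEEDUP = 15               # overall Level 8 thought-speed multiplier
SLICE_MS = 5               # Bullet Time perception slice
FLIGHT_MS = 350            # bullet flight time in the example

enhanced_response_ms = NORMAL_RESPONSE_MS / SPEEDUP  # about 23 ms
slices = FLIGHT_MS // SLICE_MS                       # 70 frames of action/reaction

# Assumption (mine): each 5 ms slice is experienced as roughly one
# normal perceptual "moment" of ~43 ms, so the flight feels like ~3 s.
MOMENT_MS = 43
subjective_s = slices * MOMENT_MS / 1000
```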

Level 9 – ? Are you human, cyborg or what?

Level 10 – You’ve no longer got a biological brain; it’s been completely replaced. This hasn’t been achieved… yet. But people are working on levels 9 and 10.

Level 10+ would be a fully digitised consciousness.

Initially compiled by Michael Kubler on the 19th of October 2018 from a variety of his ideas for the Book of New Eden novel. Inspired to post this by Geoff Kwitko’s live stream talking about his interest in SciFi.

First self-closing fridges, eventually self-refilling fridges.

Why aren’t self-closing fridge doors the default?

At work there’s been a couple of instances of the fridge door being left open overnight.

It’s obviously an accident, but the fridge defrosts and there’s a big puddle of water left; one of the smaller bar fridges opens onto a newly carpeted area, which is not good. The food also spoils and people’s lunches have to be thrown out, etc.

Because everyone was talking about ensuring that everyone else makes sure the fridge door is closed, my initial thought was that the fridges should have an alarm system, so they beep when left open for too long. Newer fridges do.

But something I haven’t seen is self-closing fridge doors. We have the technology to automatically close doors and we have fridge doors, but somehow the two haven’t been merged and made a default, the way air conditioning in a car is standard these days. Surely the cost of making that the default is far less than all the spoiled food and cleaning up that happens.
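The alarm-plus-self-closing idea boils down to two timers. A minimal sketch, with thresholds picked arbitrarily rather than taken from any real appliance:

```python
# Sketch of fridge-door logic: beep after a while, then self-close.
# Thresholds are arbitrary placeholders, not from any real appliance.
ALARM_AFTER_S = 120       # beep once the door has been open this long
AUTO_CLOSE_AFTER_S = 300  # a self-closing hinge could kick in here

def door_actions(open_seconds: float) -> list[str]:
    """Return what the fridge should do for a door open this long."""
    actions = []
    if open_seconds >= ALARM_AFTER_S:
        actions.append("beep")
    if open_seconds >= AUTO_CLOSE_AFTER_S:
        actions.append("auto_close")
    return actions
```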

The Future : Self-refilling fridges.

As part of The Book of New Eden, the futuristic sci-fi novel set in a post-scarcity world, I’m interested in what an ideal fridge would be like. Actually, it’s about the entire system, from growing food to transporting it to final usage. When you analyse the system you realise that using 5-10 kJ of oil energy per kJ of food consumed is incredibly wasteful. From the fertiliser to pesticides to diesel powered tractors, refrigerated trucks and plastic packaging, to shipping food thousands of km, often between multiple countries, until it ends up in distribution centers, which send the food to supermarkets, which most people drive to, go into, take the food off the shelves, put it into a trolley, take it out to scan it, put it into plastic bags, carry it to their car, drive home, carry it into the kitchen, put it into the fridge or pantry, and a few days or weeks later finally use the food to cook. That’s a lot of handling. Obviously the entire system needs a rethink.

In the future I would expect the food to be sent straight to the fridge. You could pull out a tray of tomatoes (or a handful of them from the bio-gel fridge) and before you’ve even finished cutting those up the tray is full of tomatoes again. No going to the store, no unpacking the shopping, seamless refilling of the fridge.

Of course the system behind that would be very different to what we currently have. The tomatoes would be grown in vertical farms which allow for a controlled environment, so no pesticides are needed. The almost completely automated growing and harvesting system would pick the tomatoes when they are ripe, or needed. They could be picked directly off the plant and washed as they are transported through a set of maglev tunnels leading into your house (or apartment) and into your fridge. The transport network could also be used for lots of other items. Of course, if you ordered a new picture frame it wouldn’t rock up inside your fridge, but somewhere a robot could then put it on your wall for you.

I haven’t mentioned paying for the food in this future as it’s also a post-monetary world, but the systems behind that are a much bigger topic.


In talking about fridge designs, here’s a variety of interesting ideas people have come up with :


Note : I’m not involved with them in any way, they just had some cool articles about fridges which I liked.

  1. – This seems to be the most high tech, disruptive idea of a fridge. You don’t have a door, instead there’s a gel and you pull items through it.
  2. – Put the heat exchanger on the top and turn it into a warm plate.
  3. – A solar powered (evaporation based, not electric) fridge for the developing world, created by 21yr old Emily Cummins.
  4. – To reduce food wastage because you didn’t see the food at the back, simply have a well labeled box that the most perishable items go into.
  5. – Need to have a bigger fridge but don’t want to pay the electricity costs? How about burying one in the ground. The groundfridge.
  6. – Stackable compartments in a fridge, making it easier for share houses to allocate fridge area and increase it as needed.
  7. – A list of various fridge ideas. Some aren’t all that amazing.

Listening to the Birds, Ants and augmented Cats.

I’ve been listening to a lot of audiobooks on the future of technology, mainly because I want to write a book set about 50yrs in the future. The three which stick in my mind most are Ray Kurzweil’s “The Singularity Is Near: When Humans Transcend Biology“, the Metatropolis series, and “Abundance: The Future Is Better Than You Think” by Peter H. Diamandis and Steven Kotler.
The current working title is “The Book of New Eden”, and it’s based around Eli leaving a city which is effectively still stuck with 2015 technology and culture and finding a whole post-scarcity, post-monetary society. I mention 50yrs in the future, but the more I think about it the more I want to set it just before the Technological Singularity, which this site suggests will be between 2060 and 2075. I’ve been wanting to write the book (originally a basic movie script) for over 5yrs, and I’ve only got an outline, 2 chapters written, stubs for some others, and what is probably 100 pages worth of notes and ideas. It feels like I’ll need a few more years before I can even finish the book, by which time the cool ideas I have about what the future could be will already have come to fruition. With that in mind, I’m sharing an enhanced version of the notes I just wrote about neurally augmented animals:


A quick bit of backstory. I’m expecting that humans will have mastered genetic engineering, nanotechnology and will be very close to creating Artificial Consciousness. As part of this tech most humans will have nanobots in their blood keeping them healthy but also neural implants in their brain allowing people to communicate in thought (technological telepathy if you will) and even remotely control robotic arms, other people and virtual characters. But, the interesting thought I had is applying the technology to animals, not just humans.
Imagine (technologically) telepathically listening to birds which have had implants put in their brains. I imagine there would be two main types of implants: read only and augmented.
The read only version would only attempt to read the neural activity, plus hormone levels and maybe general information like heart rate. Especially with basic animals like ants, these will be of such low level intelligence that what comes across isn’t thoughts but what is effectively emotions. Hopefully it can be emotions with intent. For example, hunger, but aimed at an area they are searching, or being horny and showing off to a specific mate.
As with the later books in the Metatropolis series, there’s likely to be AI created that represents groups of animals and converts their merged actions and neural processes into intents and resource requests (implants won’t be strictly necessary for this, but are likely to allow a higher level of fidelity).
Whilst some birds might have what are close to thought processes, the big difference will be in the birds that have the augmented (read/write) implants, those that actually augment the bird’s capacity and allow them to tap into neural enhancements and the Internet. At a guess, their neural capacities are likely to be augmentable up to (unaugmented) human standards. We’d be able to telepathically (and probably even verbally) converse with them. So the augmented versions would be more like Mr Peabody and Sherman, whilst the read only version would be more like Steve from Cloudy with a Chance of Meatballs.
A lot of the initial scaffolding that enables those intelligence enhancements will be based on human intelligence, with AI filling the gaps, but hopefully over time there will be entire flocks of birds and other animals which can realise their own versions of enhanced intelligence… Ohh man, what will cats be like with enhanced brains and the ability to telepathically talk to humans? Will humans and cats happily go for walks around the block? Will augmented cats change their behaviour and become more sociable with other cats? Will they tweet about their homes, or will they start doing science and celebrity work?
Can you imagine a cat and a bird coming together to work on a science experiment? They could be telepathically controlling robotic arms and creating nano assembled materials in ways we might never have thought of.
Would dogs get pissed off at humans trying to train and discipline them or would they be considered the fun, happy go lucky species? Well… Except the bulldogs and guard dogs of this world.
I’m guessing there are going to be completely different ways of thinking, entire Buddhism-like practices which will be invented by augmented animals. Just as we’d managed to get over homophobia, racism and nationalism, expanding empathy to cover all the humans on Earth (and the Moon, Mars and outer space), we’d have to be dealing with speciesism. “Ohh, you don’t count, you’re only a sparrow“.
I wonder how well such animals would pass the Turing test? Could you have telepathic conversations with a group of birds, cats, dogs, apes, whales and Artificial Intelligences, and barely realise you haven’t talked to a human in days?
Also, in the novel I’m expecting we’d have the technology to custom engineer animals, so we’d have things like tiny dinosaurs. But what animals would other animals make? What would their idea of controlled evolution be?
(Image: Tiny brontosauruses)
Just as there are different cultures, there will be an explosion of people trying to categorise animals: generally peaceful animals, angry ones, ones we’ve raised, wild ones, those angry at the slaughter, those which used to be extinct.
I like thinking more in terms of a spectrum instead of binary, in holistic, interconnected ways instead of specialised and separate. I’m interested in knowing how the different evolutionary baggage would affect different animals’ thoughts and concerns. But then, I’m also wondering: would birds be great at flying planes and space ships? I’m not sure.
It doesn’t matter too much, because by this time all transport will be automated (with optional manual controls in rural areas) and the majority of stuff sent via maglev. However, if the animals can comprehend virtual worlds then they could certainly fly planes in them if they wanted :)
After reading Oli Young’s tweet: “The idea that my kids are going to grow up with marriage equality being boringly normal makes me smile. A lot.”
I’m now wondering how accepting people will be of penguins marrying other penguins for a year, or of human and animal relations?
Also, will animals create their own trends and cultures? Will the term Doggy style take on a whole new meaning? So many thoughts!