SciFi concepts – nanoBlood, the positioning problem and remote sensing

Note: These are Michael Kubler’s personal notes regarding technology likely to exist in the future. It’s written as a reference guide for The Book of New Eden SciFi novel set in a post-scarcity (Abundance Centered) society. Eli is the main character in the novel.

nanoBlood is considered a form of bodyMod but is important enough to get its own category. Also known as nanobot blood, it's a generic term for a few different things.

All versions come with automated antidotes for spider, snake and other venoms. nanoBlood also helps power most of the other bodyMods, and a form of nanobots in the blood is how the neuroMod slowly gets assembled in the brain over several months.
v1 : Probes which provide vast amounts of info on your body. E.g. a two-year warning of a heart attack, instant detection and treatment of cancer, radiation therapy (which is why Eli needs it), etc.
v2 : Red blood cell replacement which provides far better oxygen carrying and CO2 absorption, so you can hold your breath for nearly an hour or exercise much harder. Also has amazing toxin scrubbing and processing.
v3 : The immune system is augmented. Nanobots act as advanced white cells, T cells etc. This allows not just a super immune system, but wirelessly transferred anti-viruses: very quick detection of new infections, diseases and viruses, plus the ability to transmit an internal scan to be remotely processed and an anti-virus quickly developed and downloaded. Some neural implants can do basic AV processing themselves, but it takes longer and uses almost all of their processing power. Note: The nanobots are created in nanobot factories, usually embedded in the bone marrow and a couple of other points around the body. Nanites (self-replicating nanobots) are NOT used due to the risk of them going rogue.
v4 : Nanoblood usually makes up more than 30% of the blood. It also has DNA scanning facilities which, with the help of a machine a bit like an MRI, allow all the cells in your body to have their DNA read.

Firstly, there are the nanobot sensors. These augment normal human senses. Some handle touch (pressure), temperature and velocity of travel, through to searching for chemical markers, reading DNA, and hunting for viruses, bacteria and the like.

It started with people needing replacement blood and plasma due to health reasons and artificial versions being created to fill this need.
But now there's also nano replacement blood cells. In some cases, like red blood cells, they're capable of orders of magnitude better oxygen carrying, although they don't always have the full range of other functions, such as certain types of waste removal. Removal of things like CO2 and lactic acid is also considerably amped.
There's the immune system nanoblood defence, where you can wirelessly download a new biological anti-virus set from the Internet: within minutes of someone on another planet getting a new form of cold virus, you can be actively fighting it off.

There’s a few systems at play here.

You've got the honeypot traps. These are cells designed to look enticing to potential bacteria and viruses, but specially crafted so their exact makeup is known, and they are heavily monitored. If their structure is altered then the foreign invader is very easy to identify and then analyse. Often cloud computing is used to convert the analysis into a new anti-viral signature. Some of this analysis includes specialised simulations ensuring that the identifying markers detected, if searched for and attacked, won't also destroy human tissue.
It's a major concern that you'll download an anti-virus update that's (intentionally or not) malformed and causes the nanobot immune system to start attacking and liquefying your own body: a form of nano-ebola, except not contagious. So when a new AV is applied, only small amounts are first released into a test environment to ensure it isn't dangerous to you.
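The honeypot idea boils down to a classic integrity check: the cell's exact makeup is known in advance, so any deviation flags an invader. A minimal sketch of that logic, with all names and data made up for illustration:

```python
import hashlib

# Hypothetical sketch: each honeypot cell's known molecular makeup is stored
# as a reference fingerprint; any change flags a foreign invader for analysis.

def fingerprint(structure: bytes) -> str:
    """Hash of a honeypot cell's observed molecular structure."""
    return hashlib.sha256(structure).hexdigest()

REFERENCE = fingerprint(b"known-honeypot-structure-v1")

def check_honeypot(observed: bytes) -> bool:
    """Return True if the cell is untouched, False if it's been altered."""
    return fingerprint(observed) == REFERENCE

# An unaltered cell passes; an infected one gets flagged for cloud analysis.
assert check_honeypot(b"known-honeypot-structure-v1")
assert not check_honeypot(b"structure-with-viral-insert")
```

The altered structure itself then becomes the raw material for building the new anti-viral signature.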

It should be noted: there's no use of nanites. Self-replicating nanobots are heavily controlled and very carefully researched, but aren't allowed for general use due to their potential for exponential destruction.
Instead there are nanobot factories, usually installed in your bones, which create the nanoblood. You can of course get injected with booster shots as well, as happens to Eli when trying to deal with his radiation exposure.

Other concerns include getting through the blood-brain barrier, which is especially important during the development of a neuroMod.
This can be done through border-controlled tunnels: specially reinforced nanobot tunnels with a basic verification check at one end (at least) that lets valid nanobots through.
Some nanobots are big and powerful enough to push their way through the barrier as they please, usually with it closing up behind them.

Check out the previous post about neuroMod levels for more info.

One of the main functions of nanoblood is for energy delivery to the neuroMod and various bodyMods. Want a light in your finger? You need to give it power. Want to send wireless communications to a drone or satellite? You need power for that too.
NanoBlood provides the needed energy. Sometimes just by transferring ATP around, but other times using more advanced and more nanobot optimised energy solutions.
Of course, there’s only so much energy the human body can generate and store in reserve. Most people have plenty of reserves of energy for burst tasks. But there’s of course options for consumption or injection of purified energy solutions. Or you can get a witricity (wireless electricity) recharging system bodyMod.

Dealing with toxins and unwanted chemicals, improved clotting, and dramatically improved healing: these are other things you can do with the right nanoBlood setup.

NanoBlood Sensors of interest:
CO2 – A nanoBot sensor network can be more accurate than the body's own sensing, letting you see CO2 levels throughout your body.
O2 – Humans are susceptible to oxygen deprivation because we primarily sense the amount of CO2 in the blood, not O2; direct oxygen sensors close that gap.
CO or Carbon Monoxide – Because the main problem is CO binding to haemoglobin, this isn't an issue for artificial red blood cells. https://en.wikipedia.org/wiki/Carbon_monoxide_poisoning
Lactic Acid – Created as a byproduct of using up energy, especially during exercise.
ATP – The energy currency of the body.
DNA – Being able to read the DNA of your cells. This is a very tricky endeavour and produces vast amounts of data. A few GB per cell and trillions of cells. This is usually only done occasionally for active geneMod users or in special cases, like radiation poisoning or a geneMod gone wrong.
Pressure – This is often used as a way of helping identify the nearby other nano sensors. Sometimes used to augment the normal sensation of touch, but usually in places like the brain that we don’t normally feel it.
Temperature – This is often used as a way of helping identify the nearby other nano sensors. But of course knowing your actual body’s temperature compared to the perceived temperature can help a lot.
Hormones – Detecting the levels of your various hormones.
NeuroTransmitters – Especially useful in the brain, of course. A particularly important case is knowing when there's an excess of used-up neurotransmitter chemicals and the brain needs to flush them out, i.e. go to sleep. A process that can be vastly sped up.

There’s a whole slew of chemicals to track, both good and bad, plus specialised nanobots, like those searching for cancer cells. Also ones designed specifically for your skin.

Nanoblood sensor network – The positioning issue

So most nanoBlood sensors are very basic. They have the sensor package, be it chemical, pressure, touch or something more specialised, plus a transmitter. The use of millimetre-range radio signals saves filling up the bloodstream with physical chemical messengers, which is how the body generally works. It also allows much faster signalling.
The sensors have very little power output and might only sample once a second or so, but they only need the signal to travel half a centimetre at most to the nearest relay station.
NanoBot relay stations are distributed around the body. You might have half a million sensors in your index finger and a few thousand relay bots there as well. They are bigger and don’t have sensors, just transmitters and receivers. They can also specifically relay messages to the sensors. The relays usually just forward data to their nearest neighbours until the data gets to a nearby accumulator. The accumulators are approximately the size of a pea and exist in your major bones. These are usually directly wired to each other and up to the main processors. The accumulators might receive a few billion points of data a minute and can store more than 4 hours of it.
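A quick back-of-envelope check on the accumulator figures above (the per-reading size is an assumption; "a few billion" is taken as two billion for the sake of arithmetic):

```python
# Rough scale check with assumed figures: an accumulator receiving
# ~2 billion sensor readings per minute and buffering 4 hours of data.
points_per_minute = 2_000_000_000
buffer_minutes = 4 * 60
bytes_per_point = 8  # assumed compact packed reading

total_points = points_per_minute * buffer_minutes  # 480 billion readings
total_bytes = total_points * bytes_per_point       # ~3.8 TB

print(f"{total_points:,} readings, {total_bytes / 1e12:.1f} TB")
```

So each pea-sized accumulator would be holding terabytes: dense, but the kind of storage a post-scarcity setting can hand-wave.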

The main processors are about the size of your finger and usually installed in your collar bone. The left and right processors work together in a redundant way, allowing for one to fail and the data to still be available. These main processors are where the powerhouse of work happens, from nanoBot signal processing to neuroMod task offloading.
Whilst they have basic 10 m range wireless transceivers, they're also used a lot in liaising with the shoulder-mounted long-range maser (microwave laser) system. The directed maser transmissions are how you can communicate with a drone or satellite flying many km away. The drones give off regular pulsed beacons indicating their location, and your body fires directed radio waves at that location, allowing far lower powered transmissions than would normally be needed over such ranges. It also increases privacy, as the beam is harder to snoop on.

The main issue is that the nanobot sensors are dumb. They don't know where they are. Their signals are usually very basic: the sensor reading (e.g. current temperature, or the density of a chemical), the sensor type, a unique ID and a counter. The counter automatically increments on each transmission.
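That minimal packet could be sketched as a tiny record type; the field names here are assumptions, just mirroring the four pieces of information described above:

```python
from dataclasses import dataclass

# Illustrative sketch of the minimal sensor packet: reading, type,
# unique ID and an auto-incrementing counter. Names are assumptions.

@dataclass(frozen=True)
class SensorPacket:
    sensor_id: int    # unique ID, fixed at manufacture
    sensor_type: str  # e.g. "temperature", "chemical-density"
    reading: float    # the raw measurement
    counter: int      # increments on every transmission

def next_packet(prev: SensorPacket, reading: float) -> SensorPacket:
    """Each transmission reuses the ID and type, and bumps the counter."""
    return SensorPacket(prev.sensor_id, prev.sensor_type, reading, prev.counter + 1)

p0 = SensorPacket(sensor_id=0xA1F3, sensor_type="temperature", reading=37.1, counter=0)
p1 = next_packet(p0, 37.2)
assert p1.counter == 1 and p1.sensor_id == p0.sensor_id
```

Note there's no position field and no timestamp: everything about "where" and "when" has to be reconstructed further up the chain.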

Alone, this is hard to use. But the relays can specifically request readings from nearby neighbours and can map which other sensors are nearby, so you know the physical proximity of sensors to each other. The relays have a timing counter with better than millisecond precision, but might not know the exact date and time, just an incrementing timer. This goes with the data packets. The accumulators do have actual dateTime information. So between all of that, you can know when a reading was taken.
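Turning a relay's bare tick counter into a real timestamp only needs one anchor point where the accumulator saw both the tick and the wall clock. A sketch, with the tick period and function names assumed:

```python
from datetime import datetime, timedelta

# Sketch: a relay stamps packets with its own incrementing tick counter.
# The accumulator, which knows the real dateTime, records one (tick, time)
# anchor pair and converts every other tick relative to it.

def absolute_time(tick: int, anchor_tick: int, anchor_time: datetime,
                  tick_period_ms: float = 0.5) -> datetime:
    """Convert a relay tick to wall-clock time via a known anchor pair."""
    return anchor_time + timedelta(milliseconds=(tick - anchor_tick) * tick_period_ms)

anchor = datetime(2085, 3, 14, 9, 0, 0)
t = absolute_time(tick=1_000, anchor_tick=0, anchor_time=anchor)
assert t == anchor + timedelta(milliseconds=500)
```

In practice you'd refresh the anchor regularly, since a cheap relay clock will drift.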

Now you have a massive swathe of unique IDs and sensor readings, and you need to create a basic map.

You need to work out where those sensors are and there’s a variety of techniques including:

Ping and traceroute tests. Similar to Internet servers, by asking the sensors, relays and accumulators to reply as soon as possible you can get a gauge of distance based on how long the reply took. You can also do traceroutes and work out how far away a sensor is based on the number of hops. Did the signal go through 30 or 5,000 relays, and which ones?
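At these scales radio propagation is effectively instant, so the round-trip time mostly measures relay hops. A sketch of that estimate, with every figure (hop spacing, per-hop delay) an assumption:

```python
# Sketch of the ping/traceroute idea: estimate distance from hop count
# through the relay mesh, sanity-checked against the measured RTT.
# All figures are assumed for illustration.

def distance_from_hops(rtt_ms: float, per_hop_delay_ms: float, hops: int,
                       hop_spacing_mm: float = 5.0) -> float:
    """Crude distance estimate in millimetres.

    Radio propagation is effectively instant over centimetres, so both
    the RTT and the distance are dominated by the number of relays crossed.
    """
    expected_rtt = 2 * hops * per_hop_delay_ms  # out and back
    assert abs(rtt_ms - expected_rtt) < expected_rtt, "RTT inconsistent with route"
    return hops * hop_spacing_mm

# A sensor 30 relays away at ~5 mm spacing: roughly 150 mm from the accumulator.
print(distance_from_hops(rtt_ms=6.0, per_hop_delay_ms=0.1, hops=30))
```

A 5,000-hop route, by the same logic, would put the sensor tens of metres of mesh away, i.e. somewhere across the body via a winding path.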

Another method is to make known changes and look for which sensors reflect those changes. Warm up your hand, touch your face, sit on the ground, lie on the bed. Jump up and down. Drink water. All of these will light up different sensors. A big issue is that if you don't have a high enough level neuroMod, the visual and other neocortical information which gives great resolution to many of the senses isn't available.
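The core of the known-change technique is just a before/after diff: apply a stimulus, then flag the sensors whose readings moved with it. A minimal sketch (the readings and threshold are invented):

```python
# Sketch of the known-change technique: warm your hand, then flag the
# sensors whose readings changed. All data here is made up.

def responders(before: dict[int, float], after: dict[int, float],
               threshold: float = 0.5) -> set[int]:
    """IDs of sensors whose reading changed by more than the threshold."""
    return {sid for sid in before
            if sid in after and abs(after[sid] - before[sid]) > threshold}

# Temperature readings keyed by sensor ID, before/after warming one hand:
before = {101: 33.0, 102: 33.1, 103: 36.9}
after  = {101: 35.2, 102: 33.2, 103: 36.9}
assert responders(before, after) == {101}  # sensor 101 is probably in that hand
```

Repeat with enough different stimuli and each sensor accumulates a signature of which body region it must sit in.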

A third option is external scanning.

The basic version of this is simply a video camera. This is used instead of your eyes for people who don’t have a high enough level integrated neuroMod.
But there are also actual energising scanners, often integrated into hospital beds and MRI-like machines. They work by externally triggering the relays or sensors in specific locations and mapping the responses. If it's just a basic 2D scanner, like one you lie on in bed, then visual information and things like rolling over and normal human motion can help increase the resolution of the mapping. Having scanners on the sides as well, giving two axes of transmission beams, gives even greater fidelity, especially for internal sensors, like those in your liver, whose positions are harder to determine.

The encapsulating tube readers can also enable a full body DNA scan if the right type of nanoBots are available. These are usually energised remotely and need to enter the cells; they can't live between cells as many other sensors can.

Sometimes you just need a basic arm that goes part way over the body, like an arm chair rest. Or simply having a specialised doorframe is enough.

Obviously some nanosensors will be stuck between or inside cells and sit in a somewhat static location, like by a bone, muscle or tendon. Others are in the bloodstream and are moving targets. But the moving ones pass near known, fixed ones, so they can be tracked.
There are other issues, like the sensors breaking down; they're replaced regularly by the nanobot factories. At a slow enough rate this isn't a problem: you have a bunch of sensors in a known area and there's some change over time, but the positions are generally known. However, the high metabolism mode that Eli enters causes a massive increase in nanobot turnover, and he needs regular injections of new nanoBot sets, causing some of the mapping to become inaccurate. Hence he has a basic external scanner built into his bed, which works in concert with the video cameras in the room.

———————————————————————————————————————————————————————————

Remote Sensing / out of body experiences

Because the nanosensors can work wirelessly out of the body, you could feel information from the drops of blood nearby.

But it goes beyond that. The mapping of sensors and sensations can be done for objects out of your body. Say a door.
The door to my room feels the air conditioning on one side and the hot Vietnam heat on the other. It feels the wind, the cat walking past or occasionally trying to scratch it. The hinges know when they are dry and need oiling again. The handle knows when it’s being used and because of the force variations likely by who. The frame and door know when they are closed or open, but also if the house has shifted and the door doesn’t quite close properly anymore. But all this sensory information could be provided to you, once you’ve got a level 7 neuroMod. You could feel the door as if it is an extension of you.
You could then feel the sensations of a tree outside. Here the sensor dust network comes into play: GPS microdots can be used to calibrate the position of nearby smart dust, and video cameras that track occasional IR flashes from the dust sensors can provide high-fidelity positioning.
So you could feel the warmth of the sun on the tree. The wind in the leaves. But also the ants crawling on its bark. The moisture in the air, and the sap leaking from the wound where a bear swiped it during a fight. The tree obviously won :p haha

You could have sensors inside the tree which tell you about the root system and the soil nutrients it's taking up. The moisture being raised through the trunk. The CO2 it's pulling out of the atmosphere and using as a building block for creating more plant matter. How open the stomata are and how well it's breathing.

That’s a single door or a single tree.
But you could also abstract up and ‘feel’ a whole house or even a whole forest. You would feel different information. The forest would include not just the trees but the deer and birds and ants and bugs and decomposing nature. The ecosystem.
The house would include bedrooms and toilets, electricity, Internet and water, with a whole neighbourhood being equivalent to the forest.

But how large can you go? Can you abstract to a whole country? To all the oceans? To a whole planet? To a collection of planets?

This is different to experiencing the life stream of a friend, be they human or animal. Those are based on the brain's sensory perceptions and are neuroMod-enabled streams which include conscious processing of the stimuli.

You could also have life streams of AIs, which are based on their own processing and experiences (conscious or otherwise). But those aren't the same as remote sensing, which is about creating a new sensory system and a new interpretation system. You'd need algorithms and AI to help with the mapping and data processing, making sense of the data: turning it into sensations we can relate to, or helping us develop new sensations we could never have imagined.

By Michael Kubler

Photographer, cinematographer, web master/coder.
