Google AutoML Prediction with a Google Cloud Storage source

As per the Gist https://gist.github.com/kublermdk/0b8c1f6173e5b121e5aee303160fa3f3

<?php

// --------------------------------------------------
//   Example Google Cloud AutoML Prediction
// --------------------------------------------------
// @author Michael Kubler
// @date 2020-10-07
// This is a cut-down gist of what you need to
// make a Google Cloud AutoML (Auto Machine Learning)
// prediction request, based on an already uploaded
// file in Google Cloud Storage (GCS).
//
// The main point is that the payload to be provided
// needs to include a Document,
// the Document needs to have a DocumentInputConfig,
// and the DocumentInputConfig needs a GcsSource.
//
// Those things took longer than they should have to
// find and work out how to use, as the documentation
// is auto-generated and hard to understand.
// Semi-Useful links:
// https://cloud.google.com/vision/automl/docs/predict
// https://googleapis.github.io/google-cloud-php/#/docs/google-cloud/v0.141.0/automl/v1/predictionserviceclient
// https://cloud.google.com/natural-language/automl/docs/tutorial#tutorial-vision-predict-nodejs

use Google\Cloud\AutoMl\V1\PredictionServiceClient;
use Google\Cloud\AutoMl\V1\AnnotationPayload;
use Google\Cloud\AutoMl\V1\Document;
use Google\Cloud\AutoMl\V1\DocumentInputConfig;
use Google\Cloud\AutoMl\V1\ExamplePayload;
use Google\Cloud\AutoMl\V1\GcsSource;
use yii\helpers\VarDumper;

// -- Things to change
$autoMlProject = '186655544321'; // The ProjectId - Set this to your own
$autoMlLocation = 'us-central1'; // For AutoML this is likely to be the location
$autoMlModelId = 'TEN15667778886635554442'; // The modelId - Set this to your own
$autoMlCredentialsLocation = __DIR__ . '/google-service-account.json'; // Set this to wherever you keep your auth credentials file
$gsFilePath = 'gs://<bucket>/filePath.pdf'; // Obviously set this to your file location in Google Cloud Storage

// -- General setup
putenv('GOOGLE_APPLICATION_CREDENTIALS=' . $autoMlCredentialsLocation);
$autoMlPredictionServiceClient = new PredictionServiceClient();
$autoMlPredictionServiceFormattedParent = $autoMlPredictionServiceClient->modelName($autoMlProject, $autoMlLocation, $autoMlModelId);

// -- Setup the request
$pdfGsLocation = (new GcsSource())->setInputUris([$gsFilePath]);
$pdfDocumentConfig = (new DocumentInputConfig())->setGcsSource($pdfGsLocation);
$pdfDocument = (new Document())->setInputConfig($pdfDocumentConfig);
$payload = (new ExamplePayload())->setDocument($pdfDocument);

// -- Make the request (Here we actually do the prediction)
$autoMlFullResponse = $autoMlPredictionServiceClient->predict($autoMlPredictionServiceFormattedParent, $payload);

// --------------------------------------------------
//   Output #1 - All as JSON
// --------------------------------------------------
// You've got a couple of options now. You could return the full set by outputting / returning the serializeToJsonString() response:
echo $autoMlFullResponse->serializeToJsonString();

// --------------------------------------------------
//   Output #2 - Get just specific fields
// --------------------------------------------------
// Or for this example you might only want the payload[i].displayName and payload[i].textExtraction.textSegment.content
$payload = $autoMlFullResponse->getPayload();
$autoMlProcessedResponse = [];
foreach ($payload->getIterator() as $payloadEntry) {
    /** @var AnnotationPayload $payloadEntry */
    $autoMlProcessedResponse[$payloadEntry->getDisplayName()] = $payloadEntry->getTextExtraction()->getTextSegment()->getContent();
}
echo VarDumper::export($autoMlProcessedResponse); // PHP array format, you'd probably want to JSON encode it instead

// NB: You'll likely want to convert this to a class, provide the $gsFilePath via a method parameter, and return the processed response rather than echoing it

Reinvigorating TZM

At a meeting last night, Friday the 10th of July 2020 about 15 TZM members had a discussion on Team Speak about trying to reinvigorate the movement.
There were lots of ideas. But a couple of people's suggestions were about approaches for working out the best option, rather than just ways to make TZM great again. Aaron Frost pointed out the need for the scientific method, and Erykah pointed out how we need to do a post-mortem style review to work out what went well and what didn't.
Victor tried getting people to sign up to his proposal of doing face-to-face street activism and only doing that, seeing anything else as a distraction that should be shut down. There was a suggestion that we need to go back to the old, more authoritarian organisational structure, a so-called "return to the good old times". But Kees pointed out the Google Trends line for "Zeitgeist Movement", which is similar to the graph that was in my head, except it drops off much more extremely. It shows that there's now only 1% or less interest in the movement compared to at the start.
Some suggestions included creating more videos and media, and I know one person is organising a group working on podcasts. Personally, I think what matters the most is Juuso of Koto Coop, who is creating an actual RBE-aspiring community, which is working towards the actual transition instead of just getting the ideas out.
In terms of core members, the movement definitely had a lot of people burn out and do their own thing over the last few years. We also have a habit of burning out at least one good member whenever they run a global Zday event. Casey, Franky, and likely this year it'll be Cliff.
Late 2018 is when it feels like the movement was at its most fragile and very nearly disappeared. But thanks to people like Juuso, Mark, Cliff and the Discord community we managed to keep it going. I personally credit the tenacity of Mark for keeping the global meetings going and making them easy, open and transparent. We also used the opportunity to re-organise the movement. There used to be a pyramid hierarchy of communication: local chapters would report to the national coordinators, who'd report to the Global Chapters Administration. There's now no longer a central gatekeeper group like there used to be. It's a lot more distributed, partly based on my Reorganisation doc, plus some other people's ideas.
Different people have different areas of responsibility and in most cases there’s different groups who help run different projects.
Myself I’m the main person with access to www.thezeitgeistmovement.com and as such website updates are something I prioritise. I can also post on Facebook so I’m usually involved in things like the Zday events which need updates and promotion.
There is a team responsible for moderating the Discord server, another group who deals with the main Facebook page but again others who deal with a lot of the other Facebook groups. There’s Telegram, Team Speak and more.
It seems that, especially during the global Covid19 pandemic, there's been an increase in people interested in the Zeitgeist Movement. That's not surprising given we are being forced into a form of economic hibernation, and because the existing capitalist monetary system doesn't support that, there's some room for change. Something we've been wanting for a decade.
So with people interested in making TZM great again, there's a few things to consider. Firstly, there's the question of whether reviving TZM is a good idea at all.

The Zeitgeist Movement is about a systems-perspective take on transitioning to a post-scarcity society (e.g. NL/RBE) using the scientific method, sustainability, access abundance, automation, technology and the like.
Yes, I think the movement has an important place. It has the potential for far greater long-term positive change than Occupy, Extinction Rebellion, Oxfam or even the Red Cross. If you've watched Zeitgeist: Moving Forward and understand the train of thought, then you'll understand why.
Secondly, how would you go about it? My proposal would be along the lines of: "We need to reinvigorate the movement in order to help transition to a post-scarcity society. To do that we need to know why more people aren't more active in the movement, work out what we are missing, and work out what the most effective activities are that we can do."

Just from the meeting alone we have some ideas of what might draw people in, like:

  • Face to Face street activism – Although not during the current Covid19 issues
  • More online media. Videos, podcasts and the like – There’s a great Podcast team that’s being created.
  • Online community spaces – We have FB, Discord, Telegram and the like, so this is mostly taken care of. Although the website needs a bit of a content overhaul.
  • More clear messaging – There was a project by Cliff creating new explanation videos which started this but has taken a backseat whilst he organises Zday.
  • More consistency – The example given was of all chapters having the same naming scheme. Personally it sounded a bit OCD, and I think we need locally appropriate diversification.
  • A change in org structure – Note that this has already been done, just not heavily communicated.
  • A change in target audience to be less conspiracy-theory based – This is an interesting one and would require more filtering measures, which go against the movement's ethos of anyone with a good understanding of the concepts being a member. I suspect spinoff groups could add their own filtering in better ways.
  • A practical, physical manifestation of a transition. E.g Koto co-op.

I’m sure there’s plenty more ideas.

Obviously which tasks people take on depends on their personality, skills and interests. So there's no one-size-fits-all approach.

It's likely we need to do a review of historical events until now and, if possible, conduct interviews with people who are no longer members and with people who don't know about the movement, then work out some experiments to see what is actually effective.

One thing PJ said about the movement is that it’s excitement based.

Being an active TZM member I’ve of course talked to many people about the movement and ideas and something I get very consistently is people saying “So you’ve been around for 10 years and what have you achieved?”

As TZM is about promoting the ideas of the NL/RBE, we explain how we've reached lots of people: millions of views on the main videos, large amounts of media content, lots of chapters, events and activism.

But people want to know what steps we’ve made towards the transition.

As per my Price of Zero transition talk, that's where the Crossing the Chasm marketing framework comes in useful for understanding why so many people ask this question. Only a very small percentage of people are the innovators and early adopters of new concepts. Most are practically minded. They will join an RBE-aspiring community to get things done they can't under a capitalist society, but they won't go building the initial prototypes.

So based on my current understandings I think one of the most powerful things we can do is help people who already know the concepts to know we are making steps towards the transition. I’m working on a 5+25 year transition plan of my own.

But in the meantime there's Koto Coop, which is just starting, Kadagya in Peru, and a handful of proposals which need resources to get off the ground.

Still, doing the experiments is important and I think we need to work on the metrics which define success.

Is it about more members in some specific community (e.g. Facebook, or Discord)? That's an indicator of marketing, not actual transition progress.

Is it about the Google Trends line of Interest Over Time going up? This would likely indicate more people are interested in knowing about the movement. But the virality which helped spawn the initial interest was partly shaped by the cultural environment, which has changed.

Is it about how many political policies are altered, or political parties voted into power? Not likely as that’s not systemic change.

Ideally we’d have the Zeitgeist Survey Project to have a better handle on the actual change in cultural zeitgeist and also some metrics for tracking systemic change. But that’s another project to work on and needs lots of help to get started.


KotoCoop: https://kotocoop.org/about/model/

Google Trends: https://trends.google.com/trends/explore?date=all&q=zeitgeist%20movement

–Michael Kubler
Email: michael@zeitgeist-info.com
FB: @kublermdk

Disable ESET Self-Defense to Shrink your Windows volume

As you can tell from the title, if you're trying to defrag your hard drive, shrink the volume or something like that, you might have issues when you have the ESET security software (anti-virus) installed. It took me a while and involved trawling online forums, but I found the best option is to open up the ESET GUI, press F5 to open up the advanced options, go to HIPS, then disable the "Self-Defense" system. You'll want to disable your Internet connection first, just to be safe from any nasties. Reboot your computer, and now you should be able to shrink your drive, move / delete the ESET files or defrag your hard drive. Although I highly recommend you back up your drive first.

ESET – Disable Self-Defense Step by Step Animation

My Story

A little while ago I migrated my laptop from running on a spinning disk to installing a 1TB SSD. An M.2 Samsung 970 EVO Plus to be specific.

I finally got a new 8TB external drive for backups. I made an Acronis backup image, then installed the Samsung Magician software, which pointed out how I should enable Over Provisioning on my SSD. That's where a section of the drive is made available to "improve the performance and lifetime of the SSD".
Basically if a small part of the drive is written to a lot then it’s likely to cause issues, so this is a way of allowing the hotspot to be moved around the physical location of the drive. At least, that’s my understanding.

I want my drive to last, so I attempted to use the Samsung Magician software, however it wouldn’t let me make any changes.

I tried via the Computer Management -> Disk Management system to Shrink the volume, however it showed I could only shrink it by 47MB.

It turns out that both trying to shrink the volume in Disk Management and in the Samsung Magician software uses the defrag system, which tries to move files and sees how much space it can free up. However the ESET files, for me the ones in the C:\ProgramData\ESET\ESET Security\ScanCache\1185 folder, were blocked from being moved by the ESET Self-Defense system. I couldn't delete, move, change permissions on or do anything to them, even as an admin. They were also right at the end of the drive.

I found out it was ESET because the Additional Considerations part of the Windows help suggested filtering the Application Log for Event 259 after trying to see how much I could shrink the volume. That included me using Diskpart on the cmd line and also manually running the defrag C: /H /U /V /X command. But that's not needed.

I spent a while trying to disable everything in the ESET Internet Security software that I could. But the important stuff seemed to be locked down. I couldn't disable the service in services.msc, and I couldn't kill the processes in Task Manager. Being an administrator didn't help. But I hadn't restarted my computer yet.
Thankfully a post by Marcos in the ESET forum pointed out how to disable the Self-Defense system. Disabling my Internet (pressing the Airplane mode button on my laptop) and following the prompt about restarting was all it ended up taking.

I can now easily Over Provision my SSD drive.

Although I should have gone with the recommended 6%, I'll change that soon enough.

By the way. Don’t forget to re-enable Self-Defense mode in ESET and reboot again.

Advanced Filtering with MongoDB Aggregation Pipelines

This is a repost from https://medium.com/@kublermdk/advanced-filtering-with-mongodb-aggregation-pipelines-5ee7a8798746 although go read it there so I can get the Medium $$, because it reads better.


For a project Chris Were and I have been working on we discovered a great way of doing advanced queries using MongoDB’s Aggregation pipelines with the use of the $merge operator in order to filter down to a specific set of customers based on their demographics and purchasing behaviour.

For those that don't know, MongoDB's aggregation pipeline is much more sophisticated and, at least for me, more intuitive than filtering using Map-Reduce.

The project is part of the APIs and admin control panel backend that powers a mobile app for customers who are shopping.

We know things like the customer's age, gender and state.

We save some of the customer's transaction data, including the storeId, an array of products and an array of product categories. Although we only keep the last few months' worth.

We have an admin control panel where the managers can create a set of filters to select a specific customer group; they can then send push notifications to those customers, assign them coupons or send them surveys.

Except for the number of days ago, the entries allow for multiple selections, e.g. you can select all age ranges, just one, or a couple of them.

Although it'd take you a really long time in the UI to select almost all of the products, or even the categories. Hence we have both "have" and "haven't" purchased / shopped-at versions.

Example filters:

Age: 0–20, 21–30, 31–40, 41–50, 51–60, 60+

Gender: Male, Female, Other

State: South Australia, Victoria, Queensland, New South Wales, Western Australia, ACT, NT

Have Purchased Products [X,Y, …] in the last Z days

Haven’t Purchased Products [X,Y, …] in the last Z days

Have Purchased from Product Categories [X,Y, …] in the last Z days

Haven’t Purchased from Product Categories [X,Y, …] in the last Z days

Have Shopped at Store(s) [X,Y, …] in the last Z days

Haven’t Shopped at Store(s) [X,Y, …] in the last Z days

They needed the system to be flexible, so there are also include and exclude versions of the filters.
Importantly the Include filters are AND’d together whilst the Exclude filters are OR’d together. So you can get very specific with the includes whilst applying some broad exclude filters.
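As a rough sketch of that Include-AND / Exclude-OR combination (the function and field names here are illustrative, not the project's actual code), the filters can be folded into a single $match stage:

```javascript
// Hypothetical sketch: fold include filters together with $and, and
// exclude filters with $nor ("matches none of these"), so includes are
// AND'd while excludes are effectively OR'd before being negated.
function buildMatchStage(includeFilters, excludeFilters) {
  const match = {};
  if (includeFilters.length) {
    match.$and = includeFilters; // every include filter must hold
  }
  if (excludeFilters.length) {
    match.$nor = excludeFilters; // matching ANY exclude filter removes you
  }
  return { $match: match };
}

// e.g. females in South Australia, excluding anyone aged 60+
const stage = buildMatchStage(
  [{ gender: 'Female' }, { state: 'South Australia' }],
  [{ age: { $gte: 60 } }]
);
```

This means you can stack very specific includes while a single broad exclude still knocks out whole groups of customers.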

An example might be selecting people who have purchased toilet paper and alcohol hand sanitiser in the last 7 days but exclude all people aged 60+ and all people who’ve purchased kitty litter. A notification can then be sent about how there’s a new pandemic preparedness set which is now in stock, or how the stores are being regularly disinfected during the Covid19 pandemic.

Another option could be to target people in South Australia who are 30 yrs old or under and have purchased from the Deli – Vegan meats product category in the last 10 days, but exclude those who’ve purchased the new Vegan burger. Then they can be given a 2 for 1 voucher for the new burger.

With many thousands of products, hundreds of categories and a few dozen stores there’s a reasonable amount to search through. We also don’t know how many customers there will be as the system hasn’t been launched yet.

But the system has to be fairly fast as we sometimes need to queue up customer push notifications whilst processing the HTTP requests from the 3rd party sending us the transaction information.

The important parts of the data structure look like this:
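(The embedded schema snippet from the original post hasn't survived the repost. As a hedged reconstruction from the prose above, with field names that are my guesses rather than the actual schema, the shapes are roughly:)

```javascript
// Illustrative document shapes only -- the field names are assumptions
// based on the surrounding description, not the project's real schema.
const customer = {
  _id: 'customer123',
  dateOfBirth: new Date('1990-05-04'), // age filters become date-of-birth ranges
  gender: 'Female',
  postcode: 5000                       // state is derived from postcode ranges
};

const transaction = {
  _id: 'transaction456',
  customerId: 'customer123',           // what the filters ultimately resolve to
  storeId: 'store12',
  productIds: ['product1', 'product2'],
  productCategoryIds: ['category1'],
  createdAt: new Date()                // "in the last Z days" filters use this
};
```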

What we needed after filtering all the customers and transactions is a list of customer IDs. We can then feed those into a variety of systems, like the one for sending push notifications, or the one selecting which people get coupons.

A main limitation was that whilst the database servers were quite powerful, the web servers weren't. My initial thought was to process a bunch of aggregation pipelines, get a list of customer IDs and do the merging and processing in PHP, but when there's potentially 100k+ customers and transactions, Chris pushed to go harder on the database. Thankfully MongoDB is powerful and flexible enough to do what we wanted.

In the latest v4.2 version of MongoDB there's now a $merge aggregation pipeline stage which can output documents to a collection and, unlike the $out operator, has some advanced controls over what to do when matching.

I worked out that we can do two types of queries: a "Have" select of all those who should stay, and a "Have Not" select of all those who should be removed.

For a "Have" select we output the customerIds into a merged results collection for the customer group with an extra {selected: true} field, bulk delete those without the selected field, then bulk-update the remainder to remove the selected: true field.

For the "Have Not"s we select all the customerIds of those we don't want, set {excluded: true}, and bulk delete those with the field.
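A minimal sketch of those two patterns (collection and field names are illustrative; the deleteMany / updateMany calls are shown as comments since they run against the merged collection afterwards, and executing the pipelines needs a live MongoDB 4.2+ connection):

```javascript
const DAY_MS = 24 * 60 * 60 * 1000;

// "Have" select: project the matching customerIds with { selected: true }
// and $merge them into the group's results collection.
const haveSelectPipeline = [
  { $match: { productIds: 'productX',
              createdAt: { $gte: new Date(Date.now() - 7 * DAY_MS) } } },
  { $project: { _id: '$customerId', selected: { $literal: true } } },
  { $merge: { into: 'customerGroupResults', on: '_id',
              whenMatched: 'merge', whenNotMatched: 'discard' } }
];
// then: db.customerGroupResults.deleteMany({ selected: { $exists: false } })
//       db.customerGroupResults.updateMany({}, { $unset: { selected: '' } })

// "Have Not" exclude: flag the unwanted customerIds and bulk delete them.
const haveNotPipeline = [
  { $match: { productIds: 'productY' } },
  { $project: { _id: '$customerId', excluded: { $literal: true } } },
  { $merge: { into: 'customerGroupResults', on: '_id',
              whenMatched: 'merge', whenNotMatched: 'discard' } }
];
// then: db.customerGroupResults.deleteMany({ excluded: true })
```

Note the $literal wrapper: in a $project stage a bare `true` means "include this field", so $literal is needed to set an actual boolean value.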

Example of Include and Exclude filters

This is an example of the UI for setting the filters. The approximate customer count is based upon some randomly created data used for load testing. In this instance, 17k customers and 340k transactions.

The UI creates a set of filters with the values of things like the productIds, but the PHP backend, using the Yii2 framework, Mozzler base and some custom code, does some parsing of things like ages, e.g. from the string "0–20" to a range from the current unix time() down to time() minus 20 years. Similar changes are done on the backend to convert something like "60 days ago" into a unix timestamp relative to now.
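A sketch of that parsing step (hypothetical helper names; the real backend is PHP, this is just the same arithmetic in JavaScript):

```javascript
// Convert an age-range string like "21-30" (or with an en dash, "21–30")
// into a date-of-birth range usable in a $match, and "Z days ago" into a Date.
function ageRangeToDobRange(range, now = new Date()) {
  const [minAge, maxAge] = range.split(/[-\u2013]/).map(Number);
  const latestDob = new Date(now);
  latestDob.setFullYear(latestDob.getFullYear() - minAge);        // youngest allowed
  const earliestDob = new Date(now);
  earliestDob.setFullYear(earliestDob.getFullYear() - maxAge - 1); // oldest allowed
  return { $gte: earliestDob, $lte: latestDob };
}

function daysAgo(days, nowMs = Date.now()) {
  return new Date(nowMs - days * 24 * 60 * 60 * 1000);
}

const dob = ageRangeToDobRange('21-30');
const since = daysAgo(60);
```

Because "60 days ago" depends on when the query runs, these values have to be recomputed each time the pipelines are built, which is also why MongoDB Views weren't an option (as noted further down).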

I was going to do an aggregation pipeline for each filter. However, if there's 10 filters that could be a lot of work. MongoDB seems to be better at handling somewhat complicated $match queries (with some good indexes) than at running that many separate aggregations.

Chris then suggested we merge the aggregations, and I realised that doing it the following way actually works out perfectly: we only need a maximum of 4 aggregations. I'm sure if the Exclude filters weren't considered a logical OR, or the Includes weren't considered a logical AND of the filters, then things would be different.

Aggregation pipelines

1. Customer Select (Have):
Include filters for a Customer’s Age, Gender and/or State

2. Customer Exclude (Have Not):
Exclude filters for a Customer’s Age, Gender and/or State

3. Transaction Select (Have):
Include filters for Have Purchased Products, Categories and/or at Stores
Exclude filters for Haven’t Purchased Products, Categories and/or at Stores

4. Transaction Exclude (Have Not):
Include filters for Haven't Purchased Products, Categories and/or at Stores
Exclude filters for Have Purchased Products, Categories and/or at Stores

From the admin control panel UI we've grouped the filters into Include or Exclude and have an array of them.
Because things like "7 days ago" need to be converted into a unix timestamp based on the current time, we need to build the aggregations dynamically, hence using MongoDB Views wasn't really possible.

On the backend I wrote a Customer Group manager system for grouping the filters into the different categories and merging them together.
Whilst the actual queries we did were a bit more complicated than shown below (we aren't saving the state as a string but derive it from the postcode, so there's a bunch of ranges for them), what we do is very similar to the example aggregation below, based on the filters in the UI screenshot:
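The embedded Gist with the actual aggregation hasn't survived the repost, but a hedged sketch of what a customer-select pipeline for filters like those in the screenshot might look like (field, collection names and ranges are assumptions, not the real code) is:

```javascript
const DAY_MS = 24 * 60 * 60 * 1000;
const now = Date.now();

// Rough reconstruction: select female customers aged 21-30 in a given
// postcode range, then merge their ids into the group's results collection.
const customerSelectPipeline = [
  { $match: {
      $and: [
        { gender: 'Female' },
        { dateOfBirth: { // aged 21-30, expressed as a date-of-birth window
            $gte: new Date(now - 31 * 365.25 * DAY_MS),
            $lte: new Date(now - 21 * 365.25 * DAY_MS)
        } },
        { postcode: { $gte: 5000, $lte: 5999 } } // e.g. South Australia
      ]
  } },
  { $project: { _id: 1, selected: { $literal: true } } },
  { $merge: { into: 'customerGroupResults', on: '_id',
              whenMatched: 'merge', whenNotMatched: 'insert' } }
];
```

Here whenNotMatched is 'insert' because a Customer Have pipeline runs first and seeds the results collection; later pipelines narrow it down instead.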

The aggregations in the above Gist should have enough comments to explain the actual steps in detail. But it's expected that you've used MongoDB's aggregations before and have some idea of what's going on.

Some points we discovered:

  • The $merge stage lets us put all the data into a merged collection, and we can iterate over the results using a cursor, use the collection count and other things to make it both easy and very scalable. The merged collections are effectively free caching.
  • It was very powerful to always be doing a $match (select) query first. It’s fast with the right indexes and powerful with the excluded or selected: true and doing a bulk delete / update.
  • Merging into the Have / Have Not filter sets on the Customer and Transaction models means there's a maximum of 4 pipelines that will be run. Although if the Exclude filters were AND'd together instead of OR'd, then this might not be the case.
  • An edge case is that we have to add in all customerIds as a first pass if there isn't a Customer Have pipeline, so that customers who don't have any transactions can still be returned, and so that if there are no filters it selects everyone.
  • I also developed some tweaks which let us run a set of filters on just a single customer or small selection of customers to know if they are part of a customer group. This is especially used when a new transaction comes in. We point to a different (temporary) customerGroup collection in that case. Obviously querying against a small set of customerIDs makes things much faster.

The end results are good. I get around 600ms when querying against 20k customers and 180k transactions with a basic Vagrant VM on my laptop. Although that is random data I generated just for load testing.

We are still waiting to see what this will be like in production.

Let me know if something doesn’t make sense or if you want more information.

Via Negativa

Via Negativa translates to "by removal". Taleb argues that a lot of problems can be solved by removing things, not by adding more. In decision making, if you have to come up with more than one reason to do something, it's probably because you're just trying to convince yourself to do it. Decisions that are robust to errors don't need more than one good reason. You can observe the beneficial effects of Via Negativa in a vast number of fields, from medicine and diet to wealth.

Copied from: https://anantja.in/antifragile-things-that-gain-from-disorder/

This mental model wasn't in the main list of mental models I often look at, https://fs.blog/mental-models/, but it's one I've come across before and wanted to point out to people, so I thought I'd post it here on its own.

In this case, I’m thinking about using Via Negativa for removing stupid people from a group.

Covid19 – It’s not intentional, it’s neglect

I've been seeing conspiracy theories pop up saying that the Covid19 coronavirus is an intentionally created virus.
But from my research, the truth seems to be a string of incidents going back to the 1960s, culminating in the neglect of animals.

Firstly, I know that in Australia at least, government funding cuts weren't just cutting off the 'fat' but also the meat from the healthcare services. Core services were being affected. So in times like that, having a pandemic response team and spare stock of required PPE supplies is a luxury that can't be afforded.
If a rather rich country like Australia has these issues, then I'd expect a large number of countries would also have the same issue. Maybe some Nordic countries have a bit better setup, but I've no idea.

If the Covid19 virus RNA did have evidence of something like CRISPR manipulation, e.g. by having many copies of a particular nucleotide sequence, then, given the RNA has been transcribed and looked at by many professionals, I'd expect that would have been announced, and I'd be more likely to consider it.

I do think that capitalism and the rich and powerful are somewhat to blame. But even Communism has its part.

The story as I understand it is such:

Back in the 1960s, China followed the USSR's crazy Communist farming ideas pitched by Trofim Lysenko as part of the 'Great Leap Forward'. Ideas like having the plants spaced really close to each other weren't tested and were very incorrect, and when combined with a few other issues they caused tens of millions of people to die from hunger when the farms failed.

Still somewhat struggling, in 1978 the Chinese government gave up its control over farming, allowing companies to take over.
During this time many peasants started hunting wildlife to survive. From bats to turtles, chimps, snakes, goats and more.
That practice then became industrialised in 1988 when the Chinese government declared wildlife a natural resource, which allowed the farming to be expanded upon and industrialised. Over time they also started to trade in more and more exotic animals, like bears, lions, pangolins and rhinos.

These animals are sold at wet markets like the one in Wuhan, China, where their cages are stacked on top of each other. The poop, blood, pus and other liquids fall from the animals at the top through to those at the bottom.

It’s believed that Covid19 was passed from a bat to a pangolin then to a human.

Covid19 Bat to Pangolin to Human

This isn't the first time such a transmission has happened from animals to humans in a wet market in China; the SARS virus also came from China, in a similar incident.

But China isn't closing the wet markets down permanently, because the main customers of the wildlife farming industry, and especially of the illegal trades, are the rich and powerful.
That's the case because the wildlife industry obviously wants to keep itself going and so has been promoting its animals as tonic products: good for bodybuilding, disease fighting and sex enhancing. Whilst none of these claims are true, they are lapped up by the rich and powerful.
That means the industry has an enormous lobbying capability.

So if you want to blame Covid19 on both capitalism and the rich and powerful, you can. But it's not an intentional virus. It's the result of a complex chain of events: bad farming practices that spawned a side industry with its own bad animal farming practices, sustained by the rich and powerful.
I think the Vegans and Vegetarians are right. We wouldn’t be in this mess if we weren’t eating so much meat and were looking after the animals better.

We wouldn't be creating strains of antibiotic-resistant diseases if we weren't dosing the livestock because we force them into horrible cramped conditions. We wouldn't have a global pandemic and an unprecedented full lockdown, possibly causing one of the worst economic downturns in decades, if we were more caring of the animals and the environment.

So yeah, someone in Wuhan likely didn't care about his animals, or was in a situation where he wasn't able to look after them well and clean them properly, so he let the fluids of various animals mix at the bottom of the stack, and now the world is paying for it. Currently (2pm on Monday the 23rd of March 2020) 330k people have been infected and 14,703 have died, and if we weren't taking extreme measures, likely millions more would be infected.

Covid19 Virus graph 23rd March 2020

The above won't stop people, especially those who are rich and have savings, from trying to take advantage of the situation and buying things up at bargain-basement prices. Others are likely to take other forms of control: political, military or otherwise.

However I've also seen people actively helping others in a very altruistic way. It's definitely inspiring. But unfortunately the elderly and the poor are going to be the most affected by Covid19. I hope you, dear reader, aren't in either group, but even if you aren't, take what precautions you can.

Main sources:

https://www.youtube.com/watch?v=TPpoJGYlW54 Vox – How wildlife trade is linked to coronavirus
https://www.worldometers.info/coronavirus/ – Latest Stats

Evernote lost my work

TLDR: Evernote doesn't seem to back up your work whilst you're writing. It doesn't save until you've actually exited the note. So it's highly vulnerable to your phone / tablet dying.

The Story

This afternoon I got my Android Tablet out and started to write up my weekly review. I haven’t actually done my review for the last month, so there was a lot to write in, like how my Baby Boy is developing, issues I’ve had with sudden sciatica in my back and some of the craziness of dealing with the Corona virus reactions and lockdown.

It’s stuff I can re-write, but won’t. I certainly won’t be using Evernote to do so.

Actions

I started by duplicating an existing template I made recently, then I renamed it, sat down for what felt like 45 minutes and poured words onto the screen.

My tablet, a Samsung Galaxy Tab S2, is a little old and the battery has a habit of dying. Which it did. A notification appeared saying the battery was at 15%, then suddenly the screen went blank and the tablet rebooted. I didn’t think too much of it, figuring that my tablet was connected to the Internet and should be saving both locally and to the cloud as I was writing. It would have been fine if I’d only lost the last minute of work.

But instead I lost probably thousands of words.

If I were writing my novel I’d be livid. It’s just not acceptable that it doesn’t automatically back up. It’s a mobile app, not a desktop app, so I shouldn’t need to [Ctrl] + [s] save every few words as I normally do when working. Especially when I’m in the flow and actively writing, I don’t want to think about saving.

Switching to Pure Writer

I’ve been burned, so from now on I think I’ll be doing my initial writing in Pure Writer before copying over to Evernote. I still like Evernote’s syncing across devices and the way it works, but I now loathe that it doesn’t autosave whilst you’re actually writing.

Killing Windows Night Light

O.M.F.G. I finally found out why my colour grading has been so off. Windows 10 “kindly” enabled its Night Light mode and made everything redder in the evening.

Night Light mode is the same idea as f.lux or Twilight: it puts a red tint over the top of your screen to reduce the amount of blue light, which is meant to help you get to sleep more easily.

However, it completely throws off any colour grading work. You need to disable it before doing any Photoshopping, video editing or anything else involving colour grading.

You can go to [Settings] -> [Display]
Then check that [Night light] is switched to off.

Then go to [Night light settings]

In the Night light settings, ensure that the Schedule is set to Off and that the Colour temperature at night slider is all the way to the right, so the screen stays white.

Unfortunately it took me way too long to realise what was going on and why. When I colour corrected a clip until it looked white to me, the DaVinci Resolve colour scopes looked like they were out. I’d notice what looked like a bit of red on the monitor, but would just tilt it until it looked fine. I trusted the monitor because I had an X-Rite i1 Display Pro colour calibrator and was using a 4K monitor.

Note that there are also a few other apps which can cause a colour tint. On my Asus Republic of Gamers laptop, the Armoury Crate app includes a featured app called GameVisual which also likes to make colour temperature changes of its own.

Hopefully this helps others with similar issues. Let me know of any other apps which cause problems.

Things I want to teach my children

Here’s a collection of things I’d love to teach my kids:

One of the most important things is how to be happy and successful. The secret to happiness is more than just having low expectations and being happily surprised; having a meaningful life where you are working towards a unified purpose is important.

Another thing is not to take on too many things. I’ve found it hard to say no, because I can see that there aren’t enough skilled people in the world trying to help, and I can see the potential in many of the projects I come across.
But by making a list of the 25+ things you want to do and focusing on only the top 5, you can hopefully keep your focus.

Note that to know your purpose, and thus how to prioritise what you want to do in life, you should read Stephen R. Covey’s book, The 7 Habits of Highly Effective People, which will also teach you things like balancing production vs production capacity, taking different views of your life, and doing decent planning.

vagrant plugin update

Vagrant is a program which makes it easy to start new virtual machines.
I’ve got a Windows machine (for the games and video editing software), but I usually code websites which run on Linux servers.

I usually have 1 or 2 VMs running on my laptop.

After getting messages from Vagrant every time I started up a VM saying a new update was available, I decided to install the latest version (v2.2.5).

It then stopped working, and when running the usual vagrant up I got the following:

Vagrant failed to initialize at a very early stage:
The plugins failed to initialize correctly. This may be due to manual
modifications made within the Vagrant home directory. Vagrant can
attempt to automatically correct this issue by running:
vagrant plugin repair
If Vagrant was recently updated, this error may be due to incompatible
versions of dependencies. To fix this problem please remove and re-install
all plugins. Vagrant can attempt to do this automatically by running:
vagrant plugin expunge --reinstall
Or you may want to try updating the installed plugins to their latest
versions:
vagrant plugin update
Error message given during initialization: Unable to resolve dependency: user requested 'vagrant-hostmanager (= 1.8.9)'

Running a vagrant plugin repair showed a new error:

Unable to resolve dependency: user requested 'vagrant-vbguest (= 0.18.0)'

Running the vagrant plugin expunge --reinstall didn’t help.

The vbguest refers to the vagrant-vbguest plugin for VirtualBox (the provider I use), which installs the VirtualBox Guest Additions and allows better two-way communication between my host machine and the guest VMs.

There was no reference to the plugin in my Vagrantfile, nor in the vagrant folder. There weren’t any good Google results either (hence why I’m writing this post).

After some playing around I found the command which fixed it:

vagrant plugin update

Running a vagrant plugin update updated the plugin to v0.19.0, and then everything worked happily.
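The escalation path Vagrant itself suggests (update, then repair, then expunge) can be sketched as a quick shell checklist. The run stub below is mine, added so the sketch just prints each command rather than executing it; delete the stub to run the commands for real against an installed Vagrant.

```shell
# Escalating remediation steps for broken Vagrant plugins after an upgrade.
# `run` is a stub that only echoes each command, so this sketch works
# without Vagrant installed; remove it to actually execute the commands.
run() { echo "+ $*"; }

run vagrant plugin update               # cheapest fix: refresh plugins to versions compatible with the new Vagrant
run vagrant plugin repair               # repair manual modifications in the Vagrant home directory
run vagrant plugin expunge --reinstall  # last resort: remove and reinstall all plugins
```

In my case the first step alone (the plugin update) was enough, so it’s worth trying that before the more destructive expunge.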

Hopefully others with the same problem can quickly try a vagrant plugin update and see if that fixes their issue.