Google AutoML Prediction with a Google Cloud Storage source

As per the Gist https://gist.github.com/kublermdk/0b8c1f6173e5b121e5aee303160fa3f3

<?php

// --------------------------------------------------
//   Example Google Cloud AutoML Prediction
// --------------------------------------------------
// @author Michael Kubler
// @date 2020-10-07
// This is a cut down gist of what you need to
// make a Google Cloud AutoML (Auto Machine Learning)
// prediction request, based off an already uploaded
// file in Google Cloud Storage (GCS).
//
// The main point is that the payload to be provided
// needs to include a Document,
// the Document needs to have a DocumentInputConfig,
// and the DocumentInputConfig needs a GcsSource.
//
// Those things took longer than they should have to
// find and work out how to use.
// The Documentation is auto-generated and hard to
// understand.
// Semi-Useful links:
// https://cloud.google.com/vision/automl/docs/predict
// https://googleapis.github.io/google-cloud-php/#/docs/google-cloud/v0.141.0/automl/v1/predictionserviceclient
// https://cloud.google.com/natural-language/automl/docs/tutorial#tutorial-vision-predict-nodejs

use Google\Cloud\AutoMl\V1\PredictionServiceClient;
use Google\Cloud\AutoMl\V1\AnnotationPayload;
use Google\Cloud\AutoMl\V1\Document;
use Google\Cloud\AutoMl\V1\DocumentInputConfig;
use Google\Cloud\AutoMl\V1\ExamplePayload;
use Google\Cloud\AutoMl\V1\GcsSource;
use yii\helpers\VarDumper;

// -- Things to change
$autoMlProject = '186655544321'; // The ProjectId - Set this to your own
$autoMlLocation = 'us-central1'; // For AutoML this is likely to be the location
$autoMlModelId = 'TEN15667778886635554442'; // The modelId - Set this to your own
$autoMlCredentialsLocation = __DIR__ . '/google-service-account.json'; // Set this to where ever you set your auth credentials file
$gsFilePath = 'gs://<bucket>/filePath.pdf'; // Obviously set this to your file location in Google Cloud Storage

// -- General setup
putenv('GOOGLE_APPLICATION_CREDENTIALS=' . $autoMlCredentialsLocation);
$autoMlPredictionServiceClient = new PredictionServiceClient();
$autoMlPredictionServiceFormattedParent = $autoMlPredictionServiceClient->modelName($autoMlProject, $autoMlLocation, $autoMlModelId);

// -- Setup the request
$pdfGsLocation = (new GcsSource())->setInputUris([$gsFilePath]);
$pdfDocumentConfig = (new DocumentInputConfig())->setGcsSource($pdfGsLocation);
$pdfDocument = (new Document())->setInputConfig($pdfDocumentConfig);
$payload = (new ExamplePayload())->setDocument($pdfDocument);

// -- Make the request (Here we actually do the prediction)
$autoMlFullResponse = $autoMlPredictionServiceClient->predict($autoMlPredictionServiceFormattedParent, $payload);

// --------------------------------------------------
//   Output #1 - All as JSON
// --------------------------------------------------
// You've got a couple of options now, you could return the full set by outputting / returning the serializeToJsonString response
echo $autoMlFullResponse->serializeToJsonString();

// --------------------------------------------------
//   Output #2 - Get just specific fields
// --------------------------------------------------
// Or for this example you might only want the payload[i].displayName and payload[i].textExtraction.textSegment.content
$payload = $autoMlFullResponse->getPayload();
$autoMlProcessedResponse = [];
foreach ($payload->getIterator() as $payloadEntry) {
    /** @var AnnotationPayload $payloadEntry */
    $autoMlProcessedResponse[$payloadEntry->getDisplayName()] = $payloadEntry->getTextExtraction()->getTextSegment()->getContent();
}
echo VarDumper::export($autoMlProcessedResponse); // PHP array format, you'd probably want to JSON encode it instead

// NB: You'll likely want to convert this to a class, provide the $gsFilePath in a method and return the expected response rather than outputting it
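As a rough illustration of that suggestion, here's a minimal sketch of such a wrapper class. The class and method names are placeholders of my own (not from the Gist), so adjust to taste:

<?php

use Google\Cloud\AutoMl\V1\PredictionServiceClient;
use Google\Cloud\AutoMl\V1\Document;
use Google\Cloud\AutoMl\V1\DocumentInputConfig;
use Google\Cloud\AutoMl\V1\ExamplePayload;
use Google\Cloud\AutoMl\V1\GcsSource;

// Hypothetical wrapper class - the names and structure are illustrative only
class AutoMlPdfPredictor
{
    private $client;
    private $formattedModelName;

    public function __construct(string $project, string $location, string $modelId, string $credentialsPath)
    {
        putenv('GOOGLE_APPLICATION_CREDENTIALS=' . $credentialsPath);
        $this->client = new PredictionServiceClient();
        $this->formattedModelName = $this->client->modelName($project, $location, $modelId);
    }

    // Run a prediction against a PDF already uploaded to GCS and return
    // the displayName => extracted text content pairs
    public function predictFromGcs(string $gsFilePath): array
    {
        $gcsSource = (new GcsSource())->setInputUris([$gsFilePath]);
        $inputConfig = (new DocumentInputConfig())->setGcsSource($gcsSource);
        $document = (new Document())->setInputConfig($inputConfig);
        $payload = (new ExamplePayload())->setDocument($document);

        $response = $this->client->predict($this->formattedModelName, $payload);

        $results = [];
        foreach ($response->getPayload()->getIterator() as $annotationPayload) {
            $results[$annotationPayload->getDisplayName()] = $annotationPayload->getTextExtraction()->getTextSegment()->getContent();
        }
        return $results;
    }
}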

Reinvigorating TZM

At a meeting last night, Friday the 10th of July 2020, about 15 TZM members had a discussion on Team Speak about trying to reinvigorate the movement.
There were lots of ideas, but a couple of people’s suggestions were about approaches to working out the best option rather than just ways to make TZM great again. Aaron Frost pointed out the need for the Scientific Method, and Erykah pointed out how we need to do a post-mortem style review to work out what went well and what didn’t.
Victor tried getting people to sign up to his proposal of doing face-to-face street activism, and only that, seeing anything else as a distraction that should be shut down. There was also a suggestion that we need to go back to the old, more authoritarian organisational structure, a so-called “return to the good old times”. But Kees pointed out the Google Trends graph for “Zeitgeist Movement”, which is similar to the graph that was in my head, except it drops off much more extremely. It shows that there’s now only 1% or less of the interest in the movement compared to at the start.
Some suggestions included creating more videos and media, and I know one person is organising a group working on podcasts. Personally I think what matters the most is Juuso of Koto Coop, who is creating an actual RBE-aspiring community, which is work towards the actual transition instead of just getting the ideas out.
In terms of core members, the movement has definitely had a lot of people burn out and go do their own thing over the last few years. We also have a habit of burning out at least one good member whenever they do a global Zday event: Casey, Franky and likely this year it’ll be Cliff.
Late 2018 is when it feels like the movement was at its most fragile and very nearly disappeared. But thanks to people like Juuso, Mark, Cliff and the Discord community we managed to keep it going. I personally credit the tenacity of Mark for keeping the global meetings going and making them easy, open and transparent. We also used the opportunity to re-organise the movement. There used to be a pyramid hierarchy of communication: local chapters would report to the national coordinators, who’d report to the Global Chapters Administration. There’s now no longer a central gatekeeper group like there used to be. It’s a lot more distributed, partly based on my Reorganisation doc, plus some other people’s ideas.
Different people have different areas of responsibility and in most cases there’s different groups who help run different projects.
I’m the main person with access to www.thezeitgeistmovement.com, and as such website updates are something I prioritise. I can also post on Facebook, so I’m usually involved in things like the Zday events which need updates and promotion.
There is a team responsible for moderating the Discord server, another group who deals with the main Facebook page but again others who deal with a lot of the other Facebook groups. There’s Telegram, Team Speak and more.
It seems that, especially during the global Covid19 pandemic, there’s been an increase in people interested in the Zeitgeist Movement. That’s not surprising given we are being forced into a form of economic hibernation, and because the existing capitalist monetary system doesn’t support that, there’s some room for change. Something we’ve been wanting for a decade.
So with people interested in making TZM great again, there are a few things to consider. Firstly, there’s the question of whether reviving TZM is a good idea.

The Zeitgeist Movement is about a systems-perspective take on transitioning to a Post-Scarcity society (e.g NL/RBE) using the Scientific Method, Sustainability, Access Abundance, Automation and Technology and the like.
Yes, I think the movement has an important place. It has the potential for far greater long-term positive change than Occupy, Extinction Rebellion, Oxfam or even the Red Cross. If you’ve watched Zeitgeist Moving Forward and understand the train of thought then you’ll understand why.
Secondly, how would you go about it? My proposal would be along the lines of: “We need to reinvigorate the movement in order to help transition to a post-scarcity society. To do that we need to know why more people aren’t more active in the movement, work out what we are missing and work out the most effective activities we can do.”

Just from the meeting alone we have some ideas of what might draw people in, like:

  • Face to Face street activism – Although not during the current Covid19 issues
  • More online media. Videos, podcasts and the like – There’s a great Podcast team that’s being created.
  • Online community spaces – We have FB, Discord, Telegram and the like, so this is mostly taken care of. Although the website needs a bit of a content overhaul.
  • More clear messaging – There was a project by Cliff creating new explanation videos, which started on this but has taken a backseat whilst he organises Zday.
  • More consistency – The example given was of all chapters having the same naming scheme. Personally it sounded a bit OCD and I think we need to have locally appropriate diversification.
  • A change in org structure – Note that this has already been done, just not heavily communicated.
  • A change in target audience to be less conspiracy-theory based – This is an interesting one and would require more filtering measures, which go against the movement’s ethos of anyone with a good understanding of the concepts being a member. I suspect spinoff groups could add their own filtering in better ways.
  • A practical, physical manifestation of a transition. E.g Koto co-op.

I’m sure there’s plenty more ideas.

Obviously which tasks people take on depends on their personality, skills and interests, so there’s no one-size-fits-all approach.

It’s likely we need to do a review of historical events up until now and, if possible, interviews with people who are no longer members and people who don’t know about the movement, and then try working out some experiments to see what is actually effective.

One thing PJ said about the movement is that it’s excitement based.

Being an active TZM member I’ve of course talked to many people about the movement and its ideas, and something I get very consistently is people saying “So you’ve been around for 10 years and what have you achieved?”

As TZM is about promoting the ideas of the NL/RBE, we explain how we’ve reached lots of people: millions of views on the main videos, large amounts of media content, lots of chapters, events and activism.

But people want to know what steps we’ve made towards the transition.

As per my Price of Zero transition talk, that’s where the Crossing the Chasm marketing framework comes in useful for understanding why so many people are asking this question. Only a very small percentage of people are the innovators and early adopters of new concepts. Most are practically minded. They will join an RBE-aspiring community to get things done they can’t under a capitalist society, but they won’t go building the initial prototypes.

So based on my current understandings I think one of the most powerful things we can do is help people who already know the concepts to know we are making steps towards the transition. I’m working on a 5+25 year transition plan of my own.

But in the mean time there’s Koto Coop which is just starting, Kadagya in Peru and a handful of proposals which need resources to get off the ground.

Still, doing the experiments is important and I think we need to work on the metrics which define success.

Is it about more members in some specific community (e.g Facebook, or Discord)? That’s an indicator of marketing, not actual transition progress.

Is it about the Google Trends line of Interest Over Time going up? This would likely indicate more people interested in knowing about the movement. But partly the virality which helped spawn the initial interest was shaped by the cultural environment, which has changed.

Is it about how many political policies are altered, or political parties voted into power? Not likely as that’s not systemic change.

Ideally we’d have the Zeitgeist Survey Project running so we’d have a better handle on the actual change in the cultural zeitgeist, plus some metrics for tracking systemic change. But that’s another project to work on and it needs lots of help to get started.


KotoCoop: https://kotocoop.org/about/model/

Google Trends: https://trends.google.com/trends/explore?date=all&q=zeitgeist%20movement

–Michael Kubler
Email: michael@zeitgeist-info.com
FB: @kublermdk

Advanced Filtering with MongoDB Aggregation Pipelines

This is a repost from https://medium.com/@kublermdk/advanced-filtering-with-mongodb-aggregation-pipelines-5ee7a8798746 although go read it there so I can get the Medium $$, because it reads better.


For a project Chris Were and I have been working on, we discovered a great way of doing advanced queries using MongoDB’s aggregation pipelines, with the use of the $merge operator, in order to filter down to a specific set of customers based on their demographics and purchasing behaviour.

For those that don’t know, MongoDB’s aggregation pipeline is much more sophisticated and, at least for me, more intuitive than filtering using map-reduce.

The project is part of the APIs and admin control panel backend that powers a mobile app for customers who are shopping.

We know things like the customer’s age, gender, and state.

We save some of the customer’s transaction data, including the storeId, an array of products and an array of product categories, although only the last few months’ worth.

We have an admin control panel where the managers can create a set of filters to select a specific customer group; they can then send push notifications to those customers, assign them coupons or send them surveys.

Except for the number of days ago, the entries allow for multiple selections. e.g You can select all age ranges, just one, or a couple of them.

Although it’d take you a really long time in the UI to select almost all of the products, or even the categories. Hence we have both “have” and “haven’t” purchased / shopped at versions.

Example filters:

Age: 0–20, 21–30, 31–40, 41–50, 51–60, 60+

Gender: Male, Female, Other

State: South Australia, Victoria, Queensland, New South Wales, Western Australia, ACT, NT

Have Purchased Products [X,Y, …] in the last Z days

Haven’t Purchased Products [X,Y, …] in the last Z days

Have Purchased from Product Categories [X,Y, …] in the last Z days

Haven’t Purchased from Product Categories [X,Y, …] in the last Z days

Have Shopped at Store(s) [X,Y, …] in the last Z days

Haven’t Shopped at Store(s) [X,Y, …] in the last Z days

They needed the system to be flexible so there’s also include and exclude versions of the filters.
Importantly the Include filters are AND’d together whilst the Exclude filters are OR’d together. So you can get very specific with the includes whilst applying some broad exclude filters.

An example might be selecting people who have purchased toilet paper and alcohol hand sanitiser in the last 7 days but exclude all people aged 60+ and all people who’ve purchased kitty litter. A notification can then be sent about how there’s a new pandemic preparedness set which is now in stock, or how the stores are being regularly disinfected during the Covid19 pandemic.

Another option could be to target people in South Australia who are 30 yrs old or under and have purchased from the Deli – Vegan meats product category in the last 10 days, but exclude those who’ve purchased the new Vegan burger. Then they can be given a 2 for 1 voucher for the new burger.

With many thousands of products, hundreds of categories and a few dozen stores there’s a reasonable amount to search through. We also don’t know how many customers there will be as the system hasn’t been launched yet.

But the system has to be fairly fast as we sometimes need to queue up customer push notifications whilst processing the HTTP requests from the 3rd party sending us the transaction information.

The important parts of the data structure look like this:
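The actual structure is shown in the original post; roughly, though, it's something like the following. The field names here are simplified and illustrative (for example the real system stores a postcode rather than a state string, and the age as a date-of-birth timestamp):

// Customer document (simplified, illustrative field names)
$customer = [
    '_id' => 'customer-abc123',
    'dateOfBirth' => 631152000,   // unix timestamp; the UI's age ranges get converted to ranges of this
    'gender' => 'Female',
    'postcode' => 5000,           // mapped to a state via postcode ranges
];

// Transaction document (simplified)
$transaction = [
    '_id' => 'transaction-def456',
    'customerId' => 'customer-abc123',
    'storeId' => 'store-12',
    'products' => ['product-123', 'product-456'],           // array of productIds
    'productCategories' => ['category-7', 'category-42'],   // array of categoryIds
    'createdAt' => 1594512000,                               // unix timestamp of the purchase
];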

What we needed after filtering all the customers and transactions is a list of customer IDs. We can then feed those into a variety of systems, like the one for sending push notifications, or selecting which people get coupons.

A main limitation was that whilst the database servers were quite powerful, the web servers weren’t. My initial thoughts were to process a bunch of aggregation pipelines, get a list of customerIds and do the merging and processing in PHP, but when there’s potentially 100k+ customers and transactions, Chris pushed to go harder on the database. Thankfully MongoDB is powerful and flexible enough to do what we wanted.

In the latest v4.2 version of MongoDB there’s now a $merge aggregation pipeline stage which can output documents to a collection and, unlike the $out stage, has some advanced controls about what to do when matching.

I worked out that we can do two types of queries: a “Have” select of all those who should stay and a “Have Not” select of all those who should be removed.

For a “Have” select we output the customerIds into a merged results collection for the customer group with an extra {selected: true} field, bulk delete those without the selected field, then bulk update to remove the selected: true field.

For the “Have Not”s we select all the customerIds of those we don’t want, set {excluded: true} on them in the results collection and bulk delete those with the field.
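As a rough sketch of that (using the MongoDB PHP library, with illustrative collection and field names rather than the actual project code), the tail of a “Have” pipeline and its clean-up look something like this:

// Illustrative connection - adjust to your own setup
$db = (new MongoDB\Client('mongodb://localhost:27017'))->selectDatabase('app');

// Final stages of a "Have" pipeline (here assuming it runs over the transaction
// collection, hence projecting customerId; on the customer collection the _id
// already is the customerId). Keep just the customerId, flag it, then merge it
// into the customer group's results collection.
$havePipelineTail = [
    ['$project' => ['_id' => '$customerId', 'selected' => ['$literal' => true]]],
    ['$merge' => [
        'into' => 'customerGroup_results',  // illustrative collection name
        'on' => '_id',
        'whenMatched' => 'merge',           // flag existing entries with selected: true
        'whenNotMatched' => 'discard',      // only customers already in the group can stay
    ]],
];

// Once the aggregation has run, anyone without the selected flag didn't match this
// filter set, so bulk delete them, then clear the flag ready for the next pipeline.
$results = $db->selectCollection('customerGroup_results');
$results->deleteMany(['selected' => ['$exists' => false]]);
$results->updateMany([], ['$unset' => ['selected' => '']]);

// A "Have Not" pipeline is the mirror image: it merges {excluded: true} onto the
// customers we don't want, and then we simply delete anyone carrying that flag.
$results->deleteMany(['excluded' => true]);

The very first pipeline in the chain (or the “add all the customerIds” first pass mentioned further down) is what initially populates the results collection; the later pipelines only flag and prune it.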

Example of Include and Exclude filters

This is an example of the UI for setting the filters. The approximate customer count is based upon some randomly created data used for load testing… in this instance 17k customers and 340k transactions.

The UI creates a set of filters with the values of things like the productIds, but the PHP backend (using the Yii2 framework, Mozzler base and some custom code) does some parsing of things like ages, e.g turning the string “0–20” into a range from the current unix time() back to time() minus 20 years. Similar changes are done on the backend to convert something like “60 days ago” into a unix timestamp relative to now.
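As a rough sketch of that sort of parsing (this isn’t the actual Mozzler / Yii2 code, and the function name is made up):

// Turn a UI age range like "21-30" into a date-of-birth condition the $match stage
// can use. The open-ended "60+" range isn't handled in this simplified sketch.
function ageRangeToDobCondition(string $ageRange): array
{
    $ageRange = str_replace('–', '-', $ageRange);  // normalise the en-dash the UI uses
    [$minAge, $maxAge] = array_map('intval', explode('-', $ageRange));

    return [
        '$lte' => strtotime("-{$minAge} years"),            // born at most $minAge years ago...
        '$gt' => strtotime('-' . ($maxAge + 1) . ' years'), // ...and less than ($maxAge + 1) years ago
    ];
}

// e.g "21-30" becomes: dateOfBirth <= (now - 21 years) AND dateOfBirth > (now - 31 years)
$matchCondition = ['dateOfBirth' => ageRangeToDobCondition('21-30')];

Similarly, “in the last 60 days” just becomes a ['$gte' => strtotime('-60 days')] condition on the transaction’s timestamp.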

I was going to do an aggregation pipeline for each filter. However, if there are 10 filters, that could be a lot of work for the database. MongoDB seems to be better at handling somewhat complicated $match queries (with some good indexes) than at running that many separate aggregations.

Chris then suggested we merge the aggregations, and I realised that doing it the following way actually works out perfectly and we only need a maximum of 4 aggregations. I’m sure that if the Exclude filters weren’t considered a logical OR, or the Includes weren’t considered a logical AND of the filters, then things would be different.

Aggregation pipelines

1. Customer Select (Have):
Include filters for a Customer’s Age, Gender and/or State

2. Customer Exclude (Have Not):
Exclude filters for a Customer’s Age, Gender and/or State

3. Transaction Select (Have):
Include filters for Have Purchased Products, Categories and/or at Stores
Exclude filters for Haven’t Purchased Products, Categories and/or at Stores

4. Transaction Exclude (Have Not):
Include filters for Haven’t Purchased Products, Categories and/or at Stores
Exclude filters for Have Purchased Products, Categories and/or at Stores

From the admin control panel UI we’ve grouped the filters into Include or Exclude groups and have an array of them.
Because things like “7 days ago” need to be converted into a unix timestamp based on the current time, we need to build the aggregations dynamically, hence using MongoDB Views wasn’t really possible.

On the backend I wrote a Customer Group manager system for grouping the filters into the different categories and merging them together.
Whilst the actual queries we ran were a bit more complicated than shown below (we aren’t saving the state as a string but as a postcode, so there are a bunch of postcode ranges to check), what we do is very similar to the example aggregation below, based on the filters in the UI screenshot:
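Here’s a cut down, illustrative version of a Customer Select (Have) pipeline and a Transaction Select (Have) pipeline for filters along those lines. The values, field names and collection names are assumed for the example, and it uses a state string rather than the real postcode ranges:

$db = (new MongoDB\Client('mongodb://localhost:27017'))->selectDatabase('app');

// Customer Select (Have): the Include filters are AND'd together in a single $match.
$customerSelectPipeline = [
    ['$match' => [
        'gender' => ['$in' => ['Female', 'Other']],
        'state' => ['$in' => ['South Australia']],
        'dateOfBirth' => ['$gte' => strtotime('-31 years')],  // e.g the 0-20 and 21-30 age ranges combined
    ]],
    ['$project' => ['_id' => 1, 'selected' => ['$literal' => true]]],
    ['$merge' => [
        'into' => 'customerGroup_results',
        'on' => '_id',
        'whenMatched' => 'merge',
        'whenNotMatched' => 'insert',  // the first pipeline in the chain populates the collection
    ]],
];
$db->selectCollection('customer')->aggregate($customerSelectPipeline);

// Transaction Select (Have): e.g customers who have purchased from a certain
// product category in the last 10 days.
$transactionSelectPipeline = [
    ['$match' => [
        'productCategories' => ['$in' => ['category-deliVeganMeats']],
        'createdAt' => ['$gte' => strtotime('-10 days')],
    ]],
    ['$project' => ['_id' => '$customerId', 'selected' => ['$literal' => true]]],
    ['$merge' => [
        'into' => 'customerGroup_results',
        'on' => '_id',
        'whenMatched' => 'merge',
        'whenNotMatched' => 'discard',  // only customers that survived the earlier pipelines can stay
    ]],
];
$db->selectCollection('transaction')->aggregate($transactionSelectPipeline);

// Each pipeline is followed by the selected / excluded clean-up sketched earlier.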

The aggregations in the Gist embedded in the original post should have enough comments to explain the actual steps in detail. But it’s expected that you’ve used MongoDB’s aggregations before and have some idea of what’s going on.

Some points we discovered:

  • The $merge stage lets us put all the data into a merged collection, and we can iterate over the results using a cursor, use the collection count and other things to make it both easy and very scalable. The merged collections are effectively free caching.
  • It was very powerful to always be doing a $match (select) query first. It’s fast with the right indexes, and combined with the excluded or selected: true flags and a bulk delete / update it’s very flexible.
  • Merging into the Have / Have Not filter sets on the Customer and Transaction models means there’s a maximum of 4 pipelines that will be run. Although if the Exclude filters were AND’d rather than OR’d between them, then this might not be the case.
  • An edge case is that we have to add in all customerIds as a first pass if there isn’t a Customer Have pipeline, so that customers who don’t have any transactions can still be returned, or so that if there are no filters then it selects everyone.
  • I also developed some tweaks which let us run a set of filters on just a single customer or small selection of customers to know if they are part of a customer group. This is especially used when a new transaction comes in. We point to a different (temporary) customerGroup collection in that case. Obviously querying against a small set of customerIDs makes things much faster.

The end results are good. I get around 600ms when querying against 20k customers and 180k transactions on the basic Vagrant VM on my laptop, although that is random data I generated just for load testing.

We are still waiting to see what this will be like in production.

Let me know if something doesn’t make sense or if you want more information.

Via Negativa

Via Negativa translates to “by removal.” Taleb argues that a lot of problems can be solved by removing things, not by adding more. In decision making, if you have to come up with more than one reason to do something, it’s probably because you’re just trying to convince yourself to do it. Decisions that are robust to errors don’t need more than one good reason. You can observe the beneficial effects of Via Negativa in a vast number of fields, from medicine and diet to wealth.

Copied from: https://anantja.in/antifragile-things-that-gain-from-disorder/

This mental model wasn’t in the main list of mental models I often look at (https://fs.blog/mental-models/) but it’s one I’ve come across before and wanted to point out to people, so I thought I’d post it here on its own.

In this case, I’m thinking about using Via Negativa for removing stupid people from a group.

Evernote lost my work

TLDR: Evernote doesn’t seem to back up your work whilst you’re writing. It doesn’t save until you’ve actually exited the note, so it’s highly vulnerable to your phone / tablet dying.

The Story

This afternoon I got my Android Tablet out and started to write up my weekly review. I haven’t actually done my review for the last month, so there was a lot to write in, like how my Baby Boy is developing, issues I’ve had with sudden sciatica in my back and some of the craziness of dealing with the Corona virus reactions and lockdown.

It’s stuff I can re-write, but won’t. I certainly won’t be using Evernote to do so.

Actions

I started by duplicating an existing template I made recently, then I renamed it, sat down for what felt like 45mins and poured words onto the screen.

My tablet is a little old: it’s a Samsung Galaxy Tab S2 and the battery has a habit of dying. Which it did. A notification appeared to say the battery was at 15%, then suddenly the screen went blank and it rebooted. I didn’t think too much of it, figuring that my tablet was connected to the Internet and should be saving both locally and to the cloud as I was writing. It would have been fine if I’d only lost the last minute of work.

But instead I lost probably thousands of words.

If I was writing my novel I’d be livid. It’s just not acceptable that it’s not automatically backing up. It’s a mobile app, not a desktop app, so I shouldn’t need to [Ctrl] + [s] save every few words as I normally do when working. Especially when I’m in the flow and just actively writing, I don’t want to think about saving.

Switching to Pure Writer

I’ve been burned, so from now on I think I’ll be doing my initial writing in Pure Writer before copying over to Evernote. I still like Evernote’s syncing across devices and the way it works, but I now loathe that it doesn’t autosave whilst you’re actually writing.

Killing Windows Night Light

O.M.F.G. I finally found out why my colour grading has been so off. Windows 10 “kindly” enabled its Night Light mode and made everything more red in the evening.

Night Light mode is the same idea as f.lux or Twilight: it puts a red tint over the top of your screen to reduce the amount of blue light, which is meant to help you get to sleep more easily.

However, when doing any colour grading work it completely throws off your attempts. You need to disable it in order to do any Photoshopping, video editing or anything else to do with colour grading.

You can go to [Settings] -> [Display]
Then check that [Night light] is switched to off.

Then go to [Night light settings]

In the Night light settings ensure that the Schedule is set to Off and that the Colour temperature at night is all the way to the right and thus is white.

Unfortunately it took me way too long to realise what was going on and why. When I colour corrected a clip and it looked white to me, the DaVinci Resolve colour scopes looked like they were out. I’d notice what looked like a bit of red on the monitor, but would just tilt it until it looked fine. That seemed reasonable, as I had an X-Rite i1 Display Pro colour calibrator and was using a 4K monitor.

Note that there’s also a few other apps which can cause a colour tint. On my Asus Republic of Gamers laptop the Armoury Crate app includes a Featured app called GameVisual which also likes to do some colour temperature changes of its own.

Hopefully this helps others with similar issues. Let me know of any other apps which cause problems.

Things I want to teach my children

Here’s a collection of things I’d love to teach my kids:

One of the most important things is how to be happy and successful. The secret to happiness is more than just having low expectations and being happily surprised.
Having a meaningful life where you are working towards a unified purpose is important.

Another thing is not to take on too many things. I’ve found it hard to say no, because I can see that there aren’t enough skilled people in the world trying to help, and I can see the potential in many of the projects I come across.
But by making a list of 25+ things you want to do and focusing on the top 5 you can hopefully keep your focus.

Note that to know your purpose, and thus how to prioritise what you want to do in life, you should read Stephen R. Covey’s book, The 7 Habits of Highly Effective People, which will also teach you things like balancing production vs production capacity, as well as taking different views of your life and doing decent planning.

Yii2 Swiftmailer 0 Auth exception

If you get the error Message: Failed to authenticate on SMTP server with username “****” using 0 possible authenticators

Then try to remove the username and password from the configuration file.

Context

This applies when using Swiftmailer, a common PHP emailer. This example specifically talks about the Yii2 configuration file, but it likely applies to other frameworks too.

Here’s an example of the offending config:

config/web.php (or console.php or a common.php file if you merge the two).

[
    'components' => [
        'mailer' => [
            'class' => 'yii\swiftmailer\Mailer',
            'transport' => [
                'class' => 'Swift_SmtpTransport',
                'plugins' => [
                    ['class' => 'Openbuildings\Swiftmailer\CssInlinerPlugin']
                ],
                "username" => "smtp-auth-user",
                "password" => "*****",
                "host" => 'exchange.local',
                "port" => 25,
            ],
        ],
    ],
];

The exception seen was

Message: Failed to authenticate on SMTP server with username “….” using 0 possible authenticators

This exception caused a major headache.

After investigation it turned out that removing the username and password from the transport caused it to work.

It seems that the server we were on was in a corporate environment where SMTP authentication was disabled, but Swiftmailer was trying to authenticate and failing.
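So the working config ended up being the same as above, just without the username and password (the host and port here are simply the example values from earlier):

[
    'components' => [
        'mailer' => [
            'class' => 'yii\swiftmailer\Mailer',
            'transport' => [
                'class' => 'Swift_SmtpTransport',
                'plugins' => [
                    ['class' => 'Openbuildings\Swiftmailer\CssInlinerPlugin']
                ],
                // No username or password - the SMTP server doesn't do authentication
                'host' => 'exchange.local',
                'port' => 25,
            ],
        ],
    ],
];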

Bonus – Enabling SMTP Logging

[
    'components' => [
        'mailer' => [
            'class' => 'yii\swiftmailer\Mailer',
            'enableSwiftMailerLogging' => true,
            'transport' => [
                'class' => 'Swift_SmtpTransport',
                "host" => 'localhost',
                "port" => 25,
            ],
        ],
        'log' => [
            'traceLevel' => YII_DEBUG ? 3 : 0,
            'targets' => [
                [
                    'class' => 'yii\log\FileTarget',
                    'levels' => ['error', 'warning'],
                ],
                [
                    // Log the emails
                    'class' => 'yii\log\FileTarget',
                    'categories' => ['yii\swiftmailer\Logger::add'],
                    'logFile' => '@app/runtime/logs/email.log',
                ],
            ],
        ],
    ],
];

With the above config you should now see detailed logs in the runtime/logs/email.log file.

Sabby Love

I’m a bit of a night owl but she takes it to a whole new level, sleeping most of the day.
So when she started staying over we got very little sleep. I was exhausted. But we are both in a good rhythm now.
I love the way we’ll seek each other and curl up next to each other. She’ll fall asleep in my arms. Other times she’ll caress my feet. Sometimes she’ll bite and scratch a bit when she’s feeling that way inclined. It’s not my thing, but to each their own.
I met her some time ago but we met again through the same mutual friend who looks after her Mum and we’ve been together for some months now. She’s black, which is new for me, although she has some white hairs that really stand out and I occasionally pluck out.
I love her.
Her eating habits leave a lot to be desired. Like many Vietnamese, they don’t put their litter in the bin but on the floor so I often have to sweep up afterwards.
We’ve already been through a lot. When she first arrived she was a scaredy cat, especially afraid of the rain and thunder. The roof here does make the rain extra loud 🔊  but now she is fine and can sleep through it.
It’s hard to have any privacy with her around. She’ll often come into the bathroom whilst I’m sitting on the toilet. Yet she keeps away when I shower.

We’ve watched movies together and fought off the flies, moths and bugs that often try to attack at night. She loves it when I sweep up. She didn’t always like me being on the computer too long, but now she enjoys it because she’s learnt how to be with me.

I love watching her play and enjoy life. She’s so cute.
Although she’s also invasive. She’ll check everything and go through my stuff given half a chance and sometimes destroys things. But she’s curious, not malicious.
But. I recently learnt that she’s not a she.
It turns out that Sabby, my cat, is a boy. Both Mrs Loan and myself misinterpreted Sabby’s gender. Apparently that’s pretty easy to do when the cat is young.
Sabby is a mostly black Bombay Cat whom I love dearly.
I was going to post this some time ago but wanted to make a nice collage of images. I ran out of time then, but in 15 minutes time Sabby is going away, back to his home.
See, I’ve been living in Vietnam for nearly 7 months now, but it’s time to return to Australia and then, well I’m not sure where I’m going next but I know it’s to be with my girlfriend Jen.

Using jq to update the contents of certain JSON fields

OK, I’ll be brief.

 

I created a set of API docs with Apiary using Markdown format.

We needed to change over to Postman, so I used Apimatic for the conversion, which was 99% great, except that for the item descriptions it only did a single line break, not two. As Postman reads the description as Markdown, a single line break doesn’t actually create a new line.

 

So, I needed to replace the string \n with \n\n, but the key is that I only needed to do it on the description fields.

Ohh and I needed to add an x-api-key to use the mock server. Even Postman’s own authorisation system didn’t seem to easily support this.

Using the incredibly useful comment by NathanNorman on this GitHub Postman issue I had a glimpse of what I could do.

 

So to add the x-api-key into the Postman headers, on my Linux VM I ran the following in the terminal:

jq 'walk(if (type == "object" and has("header")) then .header |= (. + [{"key": "x-api-key", "value": "{{apiKey}}"}] | unique) else . end)' postman_api.json > postman_api_apiHeader.json

 

I then checked some resources, learnt about the |= update operator and gsub for replacement.

So to replace \n with \n\n in just the description fields I ended up with:

cat postman_api_apiHeader.json | jq 'walk(if (type == "object" and has("description")) then .description |= gsub("\\n"; "\n\n") else . end)' > postman_api_apiHeader_description.json

 

If you want to see a list of the updated description fields to make sure it worked you can pipe the results to jq again.

cat postman_api_apiHeader.json | jq 'walk(if (type == "object" and has("description")) then .description |= gsub("\\n"; "\n\n") else . end)' | jq '..|.description?'

Hopefully that helps others, or myself in the future.

 

Note that I downloaded the latest version of jq in order to run this. The Debian distros are still on version 1.5, but I needed v1.6 for the walk function; it’s a pretty easy download.

Some resources:

https://stedolan.github.io/jq/ – The official jq site

https://stedolan.github.io/jq/manual/#walk(f) – Official docs description of the Walk function in jq.

https://remysharp.com/drafts/jq-recipes – Some jq recipes

https://github.com/postmanlabs/postman-app-support/issues/4044 – The Github issue that got me down this path

https://www.apimatic.io/transformer – The very powerful API Blueprint online conversion system, allowing me to upload a Markdown-style Apiary file and download Postman and also Swagger .json files.