Disable ESET Self-Defense to Shrink your Windows volume

As the title suggests, if you're trying to defragment your hard drive, shrink a volume or do something similar, you might run into issues if you have the ESET security software (antivirus) installed. It took me a while and involved trawling online forums, but I found the best option is to open the ESET GUI, press F5 to open the advanced options, go to HIPS, then disable the "Self-Defense" system. You'll want to disable your Internet connection first, just to stay safe from any nasties. Reboot your computer, and you should now be able to shrink your drive, move or delete the ESET files, or defrag your hard drive. Although I highly recommend you back up your drive first.

ESET – Disable Self-Defense Step by Step Animation

My Story

A little while ago I migrated my laptop from running on a spinning disk to installing a 1TB SSD. An M.2 Samsung 970 EVO Plus to be specific.

I finally got a new 8TB external drive for backups. I made an Acronis backup image, then installed the Samsung Magician software, which pointed out that I should enable Over Provisioning on my SSD. That's where a section of the drive is set aside to "improve the performance and lifetime of the SSD".
Basically, if a small part of the drive is written to a lot it's likely to cause issues, so this is a way of allowing the hotspot to be moved around the physical locations on the drive. At least, that's my understanding.

I want my drive to last, so I attempted to use the Samsung Magician software, however it wouldn’t let me make any changes.

I tried shrinking the volume via Computer Management -> Disk Management; however, it showed I could only shrink it by 47MB.

It turns out that both shrinking the volume in Disk Management and the Samsung Magician software use the defrag system, which tries to move files and sees how much space it can free up. However, the ESET files (for me, the ones in the C:\ProgramData\ESET\ESET Security\ScanCache\1185 folder) were blocked from being moved by the ESET Self-Defense system. I couldn't delete them, move them, change their permissions or do anything to them, even as an admin. They were also right at the end of the drive.

I found out it was ESET because the Additional Considerations section of the Windows help suggested filtering the Application log for Event 259 after trying to see how much I could shrink the volume. That included me using Diskpart on the command line and also manually running the defrag C: /H /U /V /X command. Neither of those is actually needed, though.

I spent a while trying to disable everything in the ESET Internet Security software that I could, but the important parts seemed to be locked down. I couldn't disable the service in services.msc, and I couldn't kill the processes in Task Manager. Being an administrator didn't help. But I hadn't yet restarted my computer.
Thankfully a post by Marcos on the ESET forum pointed out how to disable the Self-Defense system. Disabling my Internet connection (pressing the Airplane-mode button on my laptop) and following the prompt about restarting was all it took.

I can now easily Over Provision my SSD drive.

Although I should have gone with the recommended 6%, I'll change that soon enough.

By the way, don't forget to re-enable Self-Defense in ESET and reboot again.

Using jq to update the contents of certain JSON fields

OK, I’ll be brief.


I created a set of API docs with Apiary using Markdown format.

We needed to change over to Postman, so I used Apimatic for the conversion, which was 99% great, except that for the item descriptions it only output a single line break, not two. As Postman reads the description as Markdown, a single line break doesn't actually create a new line.


So I needed to replace the string \n with \n\n, but the key is that I only needed to do it in the description fields.

Oh, and I needed to add an x-api-key to use the mock server. Even Postman's own authorisation system didn't seem to support this easily.

Using the incredibly useful comment by NathanNorman on this GitHub Postman issue, I got a glimpse of what I could do.


So, to add the x-api-key into the Postman headers, I ran the following in the terminal on my Linux VM:

jq 'walk(if (type == "object" and has("header")) then .header |= (. + [{"key":"x-api-key", "value":"{{apiKey}}"}] | unique) else . end)' postman_api.json > postman_api_apiHeader.json


I then checked some resources and learnt about the |= update operator and gsub for replacement.

So to replace \n with \n\n in just the description fields I ended up with:

cat postman_api_apiHeader.json | jq 'walk(if (type == "object" and has("description")) then .description |= gsub("\\n"; "\n\n") else . end)' > postman_api_apiHeader_description.json


If you want to see a list of the updated description fields to make sure it worked you can pipe the results to jq again.

cat postman_api_apiHeader.json | jq 'walk(if (type == "object" and has("description")) then .description |= gsub("\\n"; "\n\n") else . end)' | jq '..|.description?'
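
To sanity-check the gsub filter on a toy document before running it over the real export, you can feed jq a tiny made-up JSON snippet (assuming jq ≥ 1.6 for walk; the sample data below is invented for the demo):

```shell
# Toy JSON with an encoded newline inside a description field
input='{"item":[{"description":"line one\nline two","request":{}}]}'

# Same walk/gsub filter as above, compact output
printf '%s' "$input" \
  | jq -c 'walk(if (type == "object" and has("description"))
                then .description |= gsub("\\n"; "\n\n") else . end)'
```

The description should come back with the newline doubled, while nothing outside description fields is touched.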

Hopefully that helps others, or myself in the future.


Note that I downloaded the latest version of jq in order to run this. The Debian distros only ship version 1.5, but I needed v1.6 for the walk function; it's a pretty easy download.

Some resources:

https://stedolan.github.io/jq/ Official jq site

https://stedolan.github.io/jq/manual/#walk(f) – Official docs description of the Walk function in jq.

https://remysharp.com/drafts/jq-recipes – Some jq recipes

https://github.com/postmanlabs/postman-app-support/issues/4044 – The Github issue that got me down this path

https://www.apimatic.io/transformer – The very powerful API Blueprint online conversion system, which allowed me to upload a Markdown-style Apiary file and download Postman and Swagger .json files.



My ~/.bash_aliases 2017

I have a base ~/.bash_aliases file which I normally use Ansible to update on various servers when needed and thought I’d share it.
This is intended for sys admins using Ubuntu.
The main aliases are :
ll – I use this ALL the time, it’s `ls -aslch` and shows the file listing.
agu – apt-get update. Just refreshes the apt package lists from the net; it doesn't actually install anything, but should be run before running any other apt commands.
agg – apt-get upgrade. Updates all the programs that need upgrading. Usually the server needs to be restarted afterwards.
acs – apt-cache search. If there's something to install, like the PHP gearman extension, I'll usually use `acs php | grep gearman` to work out the package name.
a2r – Apache2 reload.
a2rr – Apache2 restart, for when just reloading the config isn’t enough.
aliasd – Open up the local aliases file in nano. Applies the changes when you exit nano.
aliasd_base – Open up the main (base) aliases file which contains these aliases. If not using Ansible, I normally load the aliases onto a new server by pasting the contents of the file in on the command line, then running aliasd_base and pasting it again into the file.
chownWWW – Change the files and folder contents to be owned by www-data:www-data. DO NOT RUN THIS IN THE ROOT DIRECTORY.
du – Disk usage. A general listing of file and directory sizes.
das – A directory size listing ordered with the largest at the top. It's not amazing but works well enough. I use the `ncdu` program (usually has to be installed with `agi ncdu`) to get a better directory listing.
directoryExec – Makes the directories executable by the user and group.
logs – Shows most of the /var/log files, tails them so you can see any changes.
logsudo – Same as logs, but with sudo so you see more of the files.
gac – git add and git commit. A nice quick way of doing a git commit. I usually do `gac -m "* Commit message here"`.
gitt – Shows the last 24hrs worth of git commits. Great for putting into a timesheet.
gittt – Shows how long ago the commits were. I mainly use this when trying to work out which commits are from today vs yesterday.
ssh-config – Edit the main ssh config file.
diglookup – Does a quick check of the A, MX, TXT and other records for a domain. Useful when someone says there's an issue with their site. Example usage: `diglookup kublermdk.com`.
Note that the attempt to do a reverse lookup on the IP usually fails if there are multiple A records for the main site, so you sometimes have to [Ctrl]+[C] out of that bit at the end. I'll fix it one day :P
$ diglookup kublermdk.com
=== kublermdk.com ===
Wed Jan 18 11:36:58 ACDT 2017
--- dig kublermdk.com

--- dig www.kublermdk.com

--- dig kublermdk.com mx
20 aspmx2.googlemail.com.
30 aspmx4.googlemail.com.
30 aspmx3.googlemail.com.
40 aspmx5.googlemail.com.
10 aspmx.l.google.com.
20 alt1.aspmx.l.google.com.

--- dig kublermdk.com txt

--- dig mail.kublermdk.com

--- whois kublermdk.com

Whois Server Version 2.0

Domain names in the .com and .net domains can now be registered
with many different competing registrars. Go to http://www.internic.net
for detailed information.

 Sponsoring Registrar IANA ID: 677
 Whois Server: whois.netregistry.net
 Referral URL: http://www.netregistry.com.au
 Status: clientDeleteProhibited https://icann.org/epp#clientDeleteProhibited
 Status: clientTransferProhibited https://icann.org/epp#clientTransferProhibited
 Status: clientUpdateProhibited https://icann.org/epp#clientUpdateProhibited
 Updated Date: 29-oct-2014
 Creation Date: 06-jul-2007
 Expiration Date: 06-jul-2017

>>> Last update of whois database: Wed, 18 Jan 2017 01:06:49 GMT <<<

For more information on Whois status codes, please visit https://icann.org/epp


--- Web server's reverse IP 'nslookup'



Non-authoritative answer: name = li1459-66.members.linode.com.

Authoritative answers can be found from:
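
For reference, a diglookup-style function producing output in roughly that shape might look like the sketch below. This is a hypothetical re-creation, not the contents of my actual alias file; the real one also does the reverse lookup on the web server's IP mentioned above.

```shell
# Hypothetical sketch of a diglookup-style function (not the real alias file)
diglookup() {
    local domain="$1"
    echo "=== ${domain} ==="
    date
    for query in "${domain}" "www.${domain}" "${domain} mx" "${domain} txt" "mail.${domain}"; do
        echo "--- dig ${query}"
        # unquoted on purpose so "domain mx" splits into two dig arguments
        dig +short ${query}
        echo
    done
    echo "--- whois ${domain}"
    whois "${domain}"
}
```

Drop it into ~/.bash_aliases (or the _local file) and run `diglookup kublermdk.com` as above.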

I then have a ~/.bash_aliases_local file that has server-specific changes, e.g. kublermdk-logs, which shows the logs specific to my site, especially for something like a Symfony project where a lot of the useful log files are stored in a project-specific location. I'd have:
alias kublermdk-logs='tail -f /var/logs/apache2/kublermdk/*.log /var/www/kublermdk/www/app/logs/*.log'
Grab what you want, and let me know if you've got any good aliases yourself.
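
If it helps, a few of the entries above might be defined roughly like this. The `ll` definition is as described in the list; the rest are best-guess sketches matching the descriptions, not the file's exact contents:

```shell
# Sketch of a few ~/.bash_aliases entries (ll is as described above;
# the other definitions are assumptions matching the descriptions)
alias ll='ls -aslch'
alias agu='sudo apt-get update'
alias agg='sudo apt-get upgrade'
alias agi='sudo apt-get install'
alias acs='apt-cache search'
alias a2r='sudo service apache2 reload'
alias a2rr='sudo service apache2 restart'
```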

Initial Ansible Install on Ubuntu

Because I have to run this on any new Ansible or Vagrant machine, here’s a note to myself to make this a little faster.

For Ubuntu Linux machines

sudo apt-get --assume-yes install nano man git python # A new, minimal install of Ubuntu, e.g. a Vagrant box, doesn't even include a ~/.bashrc file, nor nano or man, so this helps. Also, Ansible needs Python (version 2, not 3) to run.
sudo apt-get --assume-yes install software-properties-common
sudo apt-add-repository --yes ppa:ansible/ansible
sudo apt-get --assume-yes update
sudo apt-get --assume-yes install ansible


Also ensure the hostname is something you want. Here's a one-liner I use; just set NEW_HOSTNAME to what you want:
NEW_HOSTNAME='vagrant.servers.example.com'; echo ${NEW_HOSTNAME} > /etc/hostname; echo " ${NEW_HOSTNAME}" >> /etc/hosts; hostname ${NEW_HOSTNAME};

# Don't forget to copy the ~/.ssh/config and ~/.ssh/*.pem files across, although you probably have an Ansible task for that.


Hopefully you are hosting your Ansible playbooks and other files in a git repo, so you should be able to clone that and start using it.

Once you've set up your /etc/ansible/hosts file (I usually copy mine from my Git repo) you can try SSHing into all the servers. You'll want to ensure you have run ssh-copy-id and logged in, if you usually connect to the machine via a password, or have the vars for the .pem file(s) set, especially if connecting to an AWS machine via an SSH keyfile.

After that you'll likely want to run the command below, which will automatically say 'yes' to all the requests to add each server's SSH key. This assumes your flavour of Linux has the 'yes' command.

yes yes | ansible all -m ping

Synology NAS – Start with the smaller drives first

When you are setting up a Synology NAS, such as the 8-bay (DS1815+) system I got, you'll want to start with your smallest drives first and add larger ones over time. If you start with your biggest drive, you won't be able to make use of the smaller ones.


The reason is best explained in the Synology knowledge base article titled What is Synology Hybrid RAID (SHR), and there are two images that particularly explained it to me:

Synology SHR standard RAID / Synology SHR smallest first


The first explains how a classic RAID setup wouldn't make use of the different-sized drives whilst Synology Hybrid RAID (SHR) would. The thing that isn't explained until right near the end of the article is that you can't add smaller drives. Their explanation (with some highlighting from me) is:

Does an already-created SHR volume accept drives of smaller capacity?

Suppose your SHR volume is built on 1TB drives. To replace the old drives or add new ones, you will have to use drives equal or greater than 1TB. A smaller drive (e.g., 500GB) cannot be added to the existing SHR (or Classic RAID) volume. Even if this smaller drive is added, the storage of the smaller drive still cannot be used within the volume.



If you are a company setting up a Synology, buy a bunch of identical drives (same size and brand), preferably the NAS rated ones like the WD Red. You’ll be fine, just know that you can’t use smaller sized drives than what you’ve put in.

If you have a collection of different sized drives, put your smallest sized one in first and create a disk group then volume group based off that.

Synology NAS – disk group first, then volume group (2016-08-28)



You don't need to read this next part; it's my personal story, not the main learning, but it's here so you get a better understanding of my circumstances and whether they apply to you.

Many years ago I got a Drobo 5n, a nice little 5-bay NAS. To me it's the Mac version of a NAS: it tries to do everything for you, but when it breaks there's little you can do. It also seems to focus on form over function when compared to the Synology; it looks nicer, but doesn't do anywhere near as much.

Still, I was a budding photographer, occasional film maker and nerd who accumulated terabytes of TV shows and Movies over the years. I used the Drobo as my master datasource for my photography collection. Years of events, portrait photos, wedding photos, videos of local activism events like the 2 week long Walk for Solar or a Climate Change conference I had helped organise. Basically, stuff with great significance that I didn’t want to lose.

I started off using the Drobo with some 500GB and 1TB drives and slowly purchased larger drives as I needed them. I remember having a full set of 2TB drives, adding a couple of 3TB drives, a couple of 4TB drives and then a 6TB drive. As mentioned, this was over the course of many years and included moving, and eventually something gave way: the top bay of the Drobo stopped working. It just didn't recognise any drives inserted in it. Not new ones, not old ones, not after being cleaned. This put a stop to me being able to increase the size of the Drobo, as the way you do that is to pull out a hard drive and put a new, larger one in, but with the default settings it can only cope with a single drive not working while it rebuilds the data striping onto the newer drive. As it stands, the Drobo will lose data if any more drives fail, which they'll eventually do.

The great thing about slowly replacing the old drives was that I ended up with a large box full of various sized hard drives that I could backup the important photos and videos onto and store as an off-site backup, just in case.

Choosing a Synology was easy; I became jealous of them when a work colleague at NextFaze got one and showed off its features. I later suggested that Svelte Studios (now part of Jamshop) get one, and got to play with it when I was working there. We used it not just as a file store for the designers to edit off, but also hosted websites off it; I set up a cronjob to rsync files from our servers as a backup, used it for Time Machine, and all sorts. It's effectively a Linux box, but built for storage and much easier for non-sysadmin people to use.

Due to a variety of reasons, such as moving out and working on a startup company, it took me a good 10 months to finally afford the Synology, seeing as I'd put it in the important-but-not-urgent financial category. I also managed to purchase a WD Red 6TB drive to go with it. I figured I could start with that, empty a couple of smaller drives of data onto it, and then put those drives in.

This is where the problems started. I didn't know about the need to put the smaller drives in first, so I moved the contents of a 3TB drive onto the Synology, which only had the 6TB drive in it (so no data-integrity backup), popped the 3TB into the Synology and... nothing. It wasn't initialising. It showed the drive was there, but it wouldn't let me expand onto it. I created a disk group, as none existed before; I only had the volume for the 6TB drive. I waited for the disk to be initialised with a full bad-sector scan, but I couldn't add the disk group to the existing volume, only create a new volume based on that disk group.

I'm now in the process of emptying a 1TB drive, which I want to use as the starting size for the disk group. The plan: add the 3TB drive in, create a volume from the disk group using SHR and Btrfs, migrate the data off the 6TB volume (which I'd created with the quick setup wizard), then add the 6TB drive to the disk group and expand the volume, before adding a bunch of other drives and data. The aim is to do all of this without actually losing any data. Fingers crossed a drive doesn't die in the middle of an important operation.



I should also mention that in my investigations I came across a post about how to speed up your Synology volume expansion. It requires you to SSH into the machine and do some work on the command line; something I do daily, but it might be a bit much for others. It'll also slow down your NAS whilst it's running, and it's probably not needed anyway, as Synology have made some adjustments since 2014 when that blog post was written.


Note: I do not in any way work for Drobo or Synology. This is not a paid post.

Synology DS1815+


Windows 10 Anniversary update broke Vagrant

If you are like me and use a Windows machine for web development, but run a Linux Vagrant virtual machine, then you likely had issues with the VirtualBox VM not working after the Windows 10 Anniversary Update.

I found that after spending hours waiting for Windows 10 to update, my usual vagrant up stopped working.

I tried updating both Vagrant and VirtualBox and restarting the machine; still no dice. Vagrant would complain, usually about the VM being in the wrong state, such as aborted.


After checking online I followed a suggestion and uninstalled then reinstalled VirtualBox and, w00t, it works now!

So, moral of the story: updating Vagrant is fine, but just updating VirtualBox isn't enough; you have to uninstall and reinstall it.


If you haven't already run the Windows 10 Anniversary Update then my suggestion is to make sure you create a vagrant package, or at least a VirtualBox clone, as a backup, just in case. Unless you are one of those awesome people who actually destroy their VM after each use (maybe you use Ansible to provision it?), or use something other than VirtualBox, in which case carry on.

Don’t Use Google Authenticator

Google Authenticator is used for two-factor authentication, but don't use it; use Authenticator Plus (or an equivalent alternative).

Now, two-factor auth is very important; you definitely need to set it up. The problem is that the Google Authenticator app only really works well for your Google login, because Google will also give you a set of backup codes you can use.

The issue is that the Google Authenticator app doesn't let you export the authenticator accounts, so when you start using the app for your cryptocurrency website login or Dropbox account and then lose your phone, or simply format/factory-reset it, you lose access to those accounts. Authenticator Plus, however, lets you back up to Dropbox, export to the SD card, and more, which means you can import the authenticator logins again on your new or factory-reset phone. You can't do that with the Google Authenticator app (unless they've recently added this functionality).

Authenticator Plus


Note: I'm not paid for this post, nor do I work for Authenticator Plus or Google; I'm just a disgruntled geek who had to spend time changing to a new two-factor auth program before being able to factory-reset his phone.

#Drupalgeddon – If you haven't updated your Drupal site, it's infected

Do you host a Drupal website? Did you update within 7 hours of the latest Drupal update? If not, your site is likely infected. Ours were.

The main issue is an SQL Injection vulnerability in the core of Drupal that can even allow arbitrary PHP to be executed. The fiasco is being called #Drupalgeddon.

If you run a Drupal site then check this page for information on how to fix it, or this one, which highlights how useful git is.

At one of the companies I work with we have a number of servers and host over 100 domains. Our main production server hosts a mixture of Drupal, WordPress and custom sites on a normal LAMP stack.

#Drupalgeddon hit us yesterday (not long after a ZDNet article saying things didn't seem to be too bad) and before we knew it nearly all the sites on the server were infected. The infection of spam and backdoor files wasn't limited to the Drupal sites but spread to any other sites that the Apache server had write access to. This included WordPress and custom-coded sites.


The process I roughly used:

  1. Identify the most important sites.
  2. Take them down (to prevent re-infection before we could fix them), usually by just disabling the Apache vhost.
  3. Back up the site in case there were any new user-submitted files that hadn't made their way into the backup system. E.g.
    sudo cp -Rv /var/www/sitename/www /var/www/sitename/www_haxored_2014-10-30th/
  4. Check against the previous point-in-time snapshot to see which files had changed. Check the files and either delete the new ones that are just spam or blatant backdoors, remove the infected sections of some files, or, if the infection was bad enough, restore from backup (usually an rsync with --delete-before). E.g.
    sudo rsync -rlpgoD -vzhc --dry-run --delete-before /backups/rsnapshot/hourly.2/production.server/var/www/sitename/www/ -e ssh production.server:/var/www/sitename/www/
  5. Run the drush upgrade command to update Drupal to the latest version. I.e.
    sudo drush pm-update projects drupal-7.32
    Note: If you don't have SSH access or Drush (the Drupal command-line tool) installed, you'll probably have to extract the latest Drupal core files over the top via FTP, or enable web access to only your IP address and try logging in.
  6. Re-activate the site and check everything is fine. E.g.
    sudo a2ensite sitename
  7. Change all the passwords: user logins, MySQL DB passwords and more.
  8. Check the server's email queue and ensure it hasn't been sending vast amounts of spam. Ours had over 6,500 messages in there. E.g.
    sudo exim -bp

Thankfully we had rsnapshot backups of the server from only a couple of hours before it was hit with the infection, allowing me to easily use an rsync dry run to see which files had been changed, and to investigate which were OK changes and which we needed to blow away. About half the sites were so infected that I backed them up and then just wiped them back to our backup.

There were two main backdoor files / infection types that I identified, using the following:

sudo find . -type f -name '*.php' -exec grep --files-with-matches 'PCT4BA6ODSE_' "{}" \;
sudo find . -type f -name '*.php' -exec grep --files-with-matches 'Ly4qL2U=' "{}" \;

Those commands run a find over the PHP files from the current folder down, recursively, and if a file contains those snippets its name is output so you can start trawling. Note that the hackers have MANY different techniques up their sleeves, and this is by no means a guarantee that your site isn't infected; it's just something that helped me.

The top one is usually in a file named security.php. The code is a base64-encoded snippet that runs a further base64-encoded, gzipped bit of code, which creates some HTML form code that looks like it tries to steal your username and password, although I haven't looked at it too hard and it probably does all sorts of other stuff.
I'm not a security researcher, just a general sysadmin, web dev and entrepreneur, so I don't have the tools or time to analyse these in much detail.

An interesting one was $sF="PCT4BA6ODSE_";$s21=strtolower($sF[4].$sF[5].$sF[9].$sF[10].$sF[6].$sF[3].$sF[11].$sF[8].$sF[10].$sF[1].$sF[7].$sF[8].$sF[10]);$s22=${strtoupper(.....
The code is explained well here; basically it allows the script to eval (run) the PHP in any POST request sent to it with a certain variable name. A great snippet of code for the haxors who want to re-infect your machine.
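
You can check what that $s21 ends up as yourself: the indexed characters of "PCT4BA6ODSE_" spell out BASE64_DECODE, which strtolower() turns into the PHP function name. A quick shell verification:

```shell
# Pick the indexed characters out of "PCT4BA6ODSE_" the same way the
# PHP snippet's $sF[...] concatenation does
sF="PCT4BA6ODSE_"
s21=""
for i in 4 5 9 10 6 3 11 8 10 1 7 8 10; do
    s21="${s21}${sF:$i:1}"
done
# Lower-case it, as strtolower() does
s21="$(printf '%s' "$s21" | tr '[:upper:]' '[:lower:]')"
echo "$s21"    # base64_decode
```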

The interesting bit about it is that it was appended to the files with a LOT of whitespace between the opening <?php and the actual code. This has the effect of the code being off-screen when you look at the file in nano (or vim, or anything which doesn't have line wrapping enabled). Thankfully it's really easy to see the content if you cat or head the file. Once you know what it looks like you can then grep for it, as I'd done.
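
You can reproduce the hiding trick harmlessly to see why cat, head and grep still catch it. The file name and marker string here are made up for the demo:

```shell
# Write a PHP-style file whose payload sits after 1000 spaces, pushing it
# off-screen in editors without line wrapping
printf '<?php%1000secho "HIDDEN_PAYLOAD";' '' > /tmp/hidden_payload_demo.php

# An editor shows a seemingly blank line, but grep still finds the file
grep -l 'HIDDEN_PAYLOAD' /tmp/hidden_payload_demo.php
```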

There was a third file, usually called wp-xmlrpc.php, which contained what looked like normal version/packaging information, but for Joomla, not WordPress or Drupal, and scrolling down a little there was more base64-encoded fun.

If you are a security researcher then I’m sure you’ve got your hands full, but if it helps I can supply a copy of the infected files. Otherwise, if you are a sys admin then good luck!


Mailchimp Email Obfuscator

I was working on a project that needed to re-skin the Mailchimp email preferences centre.

Unlike the Mailchimp signup form, this page is a lot harder because you need to know the user's information. Thankfully this is fairly easily done with some merge tags, to create the correct links in the eDM (email), and the Mailchimp PHP SDK.

The time-consuming bit turned out to be outputting the user's email address in the same obfuscated way that Mailchimp does. It hides enough of the email address to prevent spammers and nefarious people from stealing it, whilst ensuring that you (and it) know you are talking about the correct email address.

The technique I used to create this function is not the most efficient. In fact it’s probably the least efficient, but was highly agile and easy for me to understand whilst writing.

It takes an email address like tech@sveltestudios.com and turns it into t***@s**********.com

The PHP function should be visible in the gist below; otherwise check it out on GitHub.
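
In case the gist isn't visible, here's a rough shell sketch of the idea: keep the first character of the local part and of the domain name and mask the rest. This is a hypothetical re-implementation, not the gist's PHP, and Mailchimp's exact star counts may differ from a length-preserving mask like this one:

```shell
# Hypothetical sketch of Mailchimp-style email obfuscation (not the gist's PHP)
obfuscate_email() {
    local addr="$1"
    local user="${addr%%@*}"    # part before the @
    local host="${addr#*@}"     # part after the @
    local name="${host%.*}"     # domain without the final TLD
    local tld="${host##*.}"
    # First character kept, the rest replaced with asterisks
    local u_mask d_mask
    u_mask="${user:0:1}$(printf '%*s' $(( ${#user} - 1 )) '' | tr ' ' '*')"
    d_mask="${name:0:1}$(printf '%*s' $(( ${#name} - 1 )) '' | tr ' ' '*')"
    printf '%s@%s.%s\n' "$u_mask" "$d_mask" "$tld"
}

obfuscate_email tech@sveltestudios.com   # t***@s************.com
```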

Optimising Human potential as we head towards the singularity

Many people, myself included, have fallen into a trap.

We have a set of behaviours when using a computer which mostly exist to optimise the power of the computer, not of humans. We might start copying a load of files and leave the computer for a while because things are slower, or when trying to organise our files and folders we might multitask: keeping track of multiple file copies whilst downloading the latest episode of a TV series and chatting to friends on social media, all whilst we are meant to be working on an assignment. Such multitasking of attention is not what the brain is designed for, or can really do. We can usually only concentrate on one thing at a time and have to keep switching attention, and each switch takes up to half a second.

We have become so used to computer restrictions, like only being able to send 160-character SMS messages, that we have duplicated the restriction in services like Twitter which don't need it.

Basically, we have been optimising computer potential, which used to be scarce, at the cost of human potential. But we are near, if not past, the point where that needs to change. We need to be using computers to optimise human potential, or else computers will leave us in the dust before we even reach the singularity.

As computers are getting faster, we can predict that they will be as computationally powerful as humans in only a couple of decades. Only 18 months after that they will be twice as powerful, and barely 6 years later 32x as powerful, assuming Moore's law is the main limiting factor.
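
The arithmetic above can be checked in a couple of lines, assuming one doubling every 18 months:

```shell
# 18 months for the first doubling, then 6 more years
months=$(( 18 + 6 * 12 ))       # 90 months in total
doublings=$(( months / 18 ))    # 5 doublings
factor=$(( 2 ** doublings ))
echo "${factor}x"               # 32x as powerful
```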