vagrant plugin update

Vagrant is a program which makes it easy to start new Virtual Machines.
I’ve got a Windows machine (for the games and video editing software), but I usually code websites which run on Linux servers.

I usually have 1 or 2 VMs running on my laptop.

After getting a message from Vagrant every time I started up a VM that a new update was available, I decided to install the latest version (v2.2.5).

Vagrant then stopped working, and when running the usual vagrant up I got the following:

Vagrant failed to initialize at a very early stage:
The plugins failed to initialize correctly. This may be due to manual
modifications made within the Vagrant home directory. Vagrant can
attempt to automatically correct this issue by running:
vagrant plugin repair
If Vagrant was recently updated, this error may be due to incompatible
versions of dependencies. To fix this problem please remove and re-install
all plugins. Vagrant can attempt to do this automatically by running:
vagrant plugin expunge --reinstall
Or you may want to try updating the installed plugins to their latest
versions:
vagrant plugin update
Error message given during initialization: Unable to resolve dependency: user requested 'vagrant-hostmanager (= 1.8.9)'

Running a vagrant plugin repair showed a new error.

Unable to resolve dependency: user requested 'vagrant-vbguest (= 0.18.0)'

Running the vagrant plugin expunge --reinstall didn’t help.

The vbguest is a reference to VirtualBox (the VM manager I use) and likely the Guest Additions, which allow for better two-way communication between my host machine and the guest VMs.

There was no reference to the plugin in my Vagrantfile, nor in the vagrant folder. There weren’t any good Google results either (hence why I’m writing this post).

After some playing around I found the command which fixed it:

vagrant plugin update

Running vagrant plugin update updated the plugin to v0.19.0 and then everything worked happily.
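For anyone poking at the same problem, these are the quick things worth checking (this assumes the default Vagrant home directory of ~/.vagrant.d; on Windows it lives under your user profile instead):

vagrant plugin list # shows which plugins and versions Vagrant currently has recorded
cat ~/.vagrant.d/plugins.json # the file where those installed plugin versions are recorded
vagrant plugin update # the command that actually fixed it for me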

Hopefully if others have the same issue they can quickly try a vagrant plugin update and see if that fixes it for them too.

My ~/.bash_aliases 2017

I have a base ~/.bash_aliases file which I normally use Ansible to update on various servers when needed, and I thought I’d share it.
This is intended for sys admins using Ubuntu.
The main aliases are :
ll – I use this ALL the time, it’s `ls -aslch` and shows the file listing.
agu – Apt get update, just refreshes the apt files from the net, doesn’t actually install anything but should be run before running any apt programs.
agg – Apt Get Upgrade. This updates all the programs that need upgrading. Usually the server needs to be restarted afterwards.
acs – Apt Cache Search. If there’s something to install like the PHP gearman extension I’ll usually use `acs php | grep gearman` to work out the name.
a2r – Apache2 reload.
a2rr – Apache2 restart, for when just reloading the config isn’t enough.
aliasd – Open up the local aliases file. Will apply the changes when you exit nano.
aliasd_base – Open up the main (base) aliases file which contains these aliases. If I’m not using Ansible, I normally load the aliases onto a new server by pasting the contents of the file into the command line, then running aliasd_base and pasting them in again, this time into the file.
chownWWW – Change the files and folders in the current directory to be owned by www-data:www-data. DO NOT RUN THIS IN THE ROOT DIRECTORY.
du – Directory usage. A general listing of file and directory sizes.
das – A directory size listing ordered with the largest at the top. It’s not amazing but works well enough. I use the `ncdu` program (usually has to be installed with `agi ncdu`) to get a better directory listing.
directoryExec – Makes the directories executable by the user and group.
logs – Shows most of the /var/log files, tails them so you can see any changes.
logsudo – Same as logs, but with sudo so you see more of the files.
gac – Git add and git commit. A nice quick way of doing a git commit. I usually do `gac -m "* Commit message here"`
gitt – Shows the last 24hrs worth of git commits. Great for putting into a timesheet.
gittt – Shows how long ago the commits were. I mainly use this when trying to work out which commits are from today vs yesterday.
ssh-config – Edit the main ssh config file.
diglookup – Does a quick check of the A records, MX, TXT and other stuff for a domain, useful when someone says that there’s an issue with the site. Example usage: `diglookup kublermdk.com`
Note that the attempt to do a reverse lookup on the IP usually fails if there are multiple A records for the main site, so you sometimes have to [ctrl] + [c] cancel out of that bit at the end. I’ll fix it one day :P
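To give an idea of what the file itself looks like, here’s a rough sketch of how some of the above might be defined. These are approximations rather than my exact definitions (for example, the agi alias is assumed to be apt-get install, I’ve assumed sudo on the apt and Apache commands, and the diglookup function below is a simplified version of the real one, which does a bit more formatting):

alias ll='ls -aslch' # detailed file listing
alias agu='sudo apt-get update' # refresh the apt package lists
alias agg='sudo apt-get upgrade' # upgrade installed packages
alias agi='sudo apt-get install' # install a package
alias acs='apt-cache search' # search for a package by name, e.g acs php | grep gearman
alias a2r='sudo service apache2 reload' # reload the Apache config
alias a2rr='sudo service apache2 restart' # full Apache restart
alias aliasd='nano ~/.bash_aliases_local && source ~/.bash_aliases_local' # edit then reload the local aliases
alias logs='tail -f /var/log/*.log' # follow most of the /var/log files
alias logsudo='sudo tail -f /var/log/*.log' # same, but sudo can read more of them
alias gac='git add -A && git commit' # git add then commit, e.g gac -m "* Commit message here"

diglookup() {
    local domain="$1"
    echo "=== ${domain} ==="
    date
    echo "--- dig ${domain}"; dig +short "${domain}"
    echo "--- dig www.${domain}"; dig +short "www.${domain}"
    echo "--- dig ${domain} mx"; dig +short "${domain}" mx
    echo "--- dig ${domain} txt"; dig +short "${domain}" txt
    echo "--- dig mail.${domain}"; dig +short "mail.${domain}"
    echo "--- whois ${domain}"; whois "${domain}"
    echo "--- Web server's reverse IP"; nslookup "$(dig +short "${domain}" | head -n1)"
}

The example output below is from my real version of diglookup: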
$ diglookup kublermdk.com
=====================
=== kublermdk.com ===
=====================
Wed Jan 18 11:36:58 ACDT 2017
--- dig kublermdk.com
139.162.46.66

--- dig www.kublermdk.com
kublermdk.com.
139.162.46.66

--- dig kublermdk.com mx
20 aspmx2.googlemail.com.
30 aspmx4.googlemail.com.
30 aspmx3.googlemail.com.
40 aspmx5.googlemail.com.
10 aspmx.l.google.com.
20 alt1.aspmx.l.google.com.

--- dig kublermdk.com txt
"i=221&m=domains-mx2-p11"
"keybase-site-verification=XREU5_ZiKnxXnNBV2L5Jcmn1tfUvL371DsulTNs7s9I"
"google-site-verification=m1fr_lDxzFtXawjhPXV56vbyOzKdw0SyTa1zCrdbArU"

--- dig mail.kublermdk.com
ghs.google.com.
ghs.l.google.com.
172.217.25.179

--- whois kublermdk.com

Whois Server Version 2.0

Domain names in the .com and .net domains can now be registered
with many different competing registrars. Go to http://www.internic.net
for detailed information.

 Domain Name: KUBLERMDK.COM
 Registrar: NETREGISTRY PTY. LTD.
 Sponsoring Registrar IANA ID: 677
 Whois Server: whois.netregistry.net
 Referral URL: http://www.netregistry.com.au
 Name Server: NS0.DNSMADEEASY.COM
 Name Server: NS1.DNSMADEEASY.COM
 Name Server: NS2.DNSMADEEASY.COM
 Name Server: NS3.DNSMADEEASY.COM
 Status: clientDeleteProhibited https://icann.org/epp#clientDeleteProhibited
 Status: clientTransferProhibited https://icann.org/epp#clientTransferProhibited
 Status: clientUpdateProhibited https://icann.org/epp#clientUpdateProhibited
 Updated Date: 29-oct-2014
 Creation Date: 06-jul-2007
 Expiration Date: 06-jul-2017

>>> Last update of whois database: Wed, 18 Jan 2017 01:06:49 GMT <<<

For more information on Whois status codes, please visit https://icann.org/epp

[...]

--- Web server's reverse IP 'nslookup 139.162.46.66'

li1459-66.members.linode.com.

Server: 10.0.2.3
Address: 10.0.2.3#53

Non-authoritative answer:
66.46.162.139.in-addr.arpa name = li1459-66.members.linode.com.

Authoritative answers can be found from:

---======---
I then have a ~/.bash_aliases_local file that has server-specific changes, e.g. kublermdk-logs which shows the logs specific to my site, especially if it’s something like a Symfony project where a lot of the useful log files are stored somewhere project-specific. I’d have:
alias kublermdk-logs='tail -f /var/log/apache2/kublermdk/*.log /var/www/kublermdk/www/app/logs/*.log'
Grab what you want, let me know if you’ve got any good aliases yourself.
Cheers!

Initial Ansible Install on Ubuntu

Because I have to run this on any new Ansible or Vagrant machine, here’s a note to myself to make this a little faster.

For Ubuntu Linux machines:

sudo apt-get --assume-yes install nano man git python # A new, minimal install of Ubuntu, e.g. a Vagrant box, doesn't even include a ~/.bashrc file, nor nano or man, so this helps. Also, Ansible needs Python (version 2, not 3) to run.
sudo apt-get --assume-yes install software-properties-common
sudo apt-add-repository --yes ppa:ansible/ansible
sudo apt-get --assume-yes update
sudo apt-get --assume-yes install ansible
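
Once those finish, a quick sanity check that Ansible installed correctly (this just prints the installed version):

ansible --version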

 

Also ensure the hostname is something you want. Here’s a one-liner I use (run it as root); just set NEW_HOSTNAME to the hostname you want:
NEW_HOSTNAME='vagrant.servers.example.com'; echo ${NEW_HOSTNAME} > /etc/hostname; echo "127.0.0.1 ${NEW_HOSTNAME}" >> /etc/hosts; hostname ${NEW_HOSTNAME};

# Don’t forget to copy the ~/.ssh/config and ~/.ssh/*.pem files across, although you probably have an Ansible task for that.

 

Hopefully you are hosting your Ansible playbooks and other files in a git repo, so you should be able to clone that and start using it.

Once you’ve set up your /etc/ansible/hosts file (I usually copy mine from my Git repo) you can try SSHing into all the servers. You’ll want to ensure you have run ssh-copy-id and logged in if you usually connect to the machine via a password, or have the vars for the .pem file(s) set, especially if connecting to an AWS machine via an SSH keyfile.
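
For reference, a minimal /etc/ansible/hosts inventory might look something like the sketch below; the group names, hostnames and key path are made up for illustration:

[webservers]
web1.example.com
web2.example.com

[awsservers]
app1.example.com ansible_user=ubuntu ansible_ssh_private_key_file=~/.ssh/example.pem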

After that you’ll likely want to run the command below, which will automatically say ‘yes’ to all the requests to add each server’s SSH key. This assumes your flavour of Linux has the ‘yes‘ command.

yes yes | ansible all -m ping

Can the REM stack be a thing?

REM stack :

  • React
  • Express
  • MongoDB

 

It’s like the MEAN stack, which stands for Mongo, Express, Angular, Node. But Express is built on top of Node, so listing Node separately is redundant.
The other stack I still use a bit of is LAMP: Linux, Apache, MySQL, PHP.

If you like the idea then you can retweet this.

 

 

Actually, the more I look into it, the more GraphQL seems to be a big thing. Maybe there’s a REG stack option as well?

Synology NAS – Start with the smaller drives first

When you are setting up a Synology NAS, such as the 8-bay (DS1815+) system I got, you’ll want to start with your smallest drive first and add larger ones over time. If you start with your biggest drive, you won’t be able to make use of the smaller ones.

 

The reason is best explained in the Synology knowledgebase article titled What is Synology Hybrid RAID (SHR), and there are two images that particularly explained it to me:

[Images: Synology SHR standard RAID; Synology SHR smallest first]

 

The first explains how a classic RAID setup wouldn’t make use of the different-sized drives whilst Synology Hybrid RAID (SHR) would. The thing that isn’t explained until right near the end of the article is that you can’t add smaller drives. Their explanation (with some highlighting from me) is:

Does an already-created SHR volume accept drives of smaller capacity?

Suppose your SHR volume is built on 1TB drives. To replace the old drives or add new ones, you will have to use drives equal or greater than 1TB. A smaller drive (e.g., 500GB) cannot be added to the existing SHR (or Classic RAID) volume. Even if this smaller drive is added, the storage of the smaller drive still cannot be used within the volume.

 

Conclusion

If you are a company setting up a Synology, buy a bunch of identical drives (same size and brand), preferably NAS-rated ones like the WD Red. You’ll be fine; just know that you can’t use drives smaller than the ones you’ve already put in.

If you have a collection of different-sized drives, put your smallest one in first and create a disk group, then a volume group, based off that.

2016-08-28 Synology NAS disk group first, then volume group

 

Backstory

You don’t need to read this, it’s my personal story, not the main learning, but it’s here so you get a better understanding of my circumstances and if they apply to you.

Many years ago I got a Drobo 5N, a nice little 5-bay NAS. To me it’s the Mac version of a NAS: it tries to do everything for you, but when it breaks there’s little you can do, and it seems to focus on form over function when compared to the Synology. It looks nicer, but doesn’t do anywhere near as many things.

Still, I was a budding photographer, occasional film maker and nerd who accumulated terabytes of TV shows and Movies over the years. I used the Drobo as my master datasource for my photography collection. Years of events, portrait photos, wedding photos, videos of local activism events like the 2 week long Walk for Solar or a Climate Change conference I had helped organise. Basically, stuff with great significance that I didn’t want to lose.

I started off using the Drobo with some 500GB and 1TB drives and slowly purchased larger drives as I needed them. I remember having a full set of 2TB drives, then adding a couple of 3TB drives, a couple of 4TB drives and then a 6TB drive. As mentioned, this was over the course of many years and included moving, and eventually something gave way. The top bay of the Drobo stopped working. It just didn’t recognise any drives inserted in it. Not new ones, not old ones, not after being cleaned up. This put a stop to me being able to increase the size of the Drobo, as the way you increase the size is to pull out a hard drive and put a new, larger one in, but with the default settings it can only deal with a single drive not working and then rebuild the data striping onto the newer drive. As it stands the Drobo will lose data if any more drives fail, which they’ll eventually do.

The great thing about slowly replacing the old drives was that I ended up with a large box full of various sized hard drives that I could backup the important photos and videos onto and store as an off-site backup, just in case.

Choosing a Synology was easy; I became jealous of them when a work colleague at NextFaze got one and showed off its features. I later suggested that Svelte Studios (now a part of Jamshop) get one, and got to play with it when I was working there. We used it not just as a file store for the designers to edit off, but also hosted websites off it, set up a cronjob to rsync files from our servers as a backup, used it for Time Machine and all sorts. It’s effectively a Linux box, but built for storage and much easier for non-sysadmin people to use.

Due to a variety of reasons, such as moving out and working on a startup company, it took me a good 10 months to finally afford to buy the Synology, seeing as I’d put it in the important-but-not-urgent financial category. I also managed to purchase a WD Red 6TB drive to go with it. I figured I could start with that, empty a couple of smaller drives of data onto it, and then put those drives in.

This is where the problems started. I didn’t know about the need to put the smaller drives in first, so I moved the contents of a 3TB drive onto the Synology, which only had the 6TB drive in it (so no data integrity backup), popped the now-empty 3TB drive in and… nothing. It wasn’t initialising. It showed the drive there, but it wouldn’t let me expand onto it. I created a disk group, as none existed before (I only had the volume for the 6TB drive). I waited for the disk to be initialised with a full bad sector scan, but I couldn’t add the disk group to the existing volume, only create a new volume based on that disk group.

I’m now in the process of emptying a 1TB drive and want to use that as the starting size for the disk group, add the 3TB drive in, and create a volume group from it using SHR and Btrfs. Then I’ll migrate the data off the 6TB volume (which I’d used the quick setup wizard to create), add the 6TB drive to the disk group and expand the volume, before adding a bunch of other drives and data. The aim is to do all of this without actually losing any data. Fingers crossed a drive doesn’t die in the middle of an important operation.

 

 

I should also mention that in my investigations I came across a post about how to speed up your Synology volume expansion. It requires you to SSH into the machine and do some stuff on the command line, something I do daily but which might be a bit too much for others. It’ll also slow down your NAS whilst it’s running, and it’s probably not needed anyway, as Synology have made some adjustments since 2014 when that blog post was written.

 

Note : I do not in any way work for Drobo nor Synology. This is not a paid post.

Synology DS1815+

 

Mailchimp Email Obfuscator

I was working on a project that tried to re-skin the Mailchimp email preferences center.

Unlike the Mailchimp signup form, this page is a lot harder because you need to know the user’s information. Thankfully this is fairly easily done with some merge tags (to build the correct links in the eDM, i.e. the email) and the Mailchimp PHP SDK.

The time-consuming bit turned out to be outputting the user’s email address in the same obfuscated way that Mailchimp does. It hides enough of the email address to prevent spammers and nefarious people from stealing it, whilst ensuring that you (and it) know you are talking about the correct email address.

The technique I used to create this function is not the most efficient. In fact it’s probably the least efficient, but was highly agile and easy for me to understand whilst writing.

It takes an email address like tech@sveltestudios.com and turns it into t***@s**********.com

The PHP function should be visible in the gist below, otherwise check it out on Github.