Took www.genesis-mining.com for a ride

Ok, so the reality is https://www.genesis-mining.com took me for a ride, a ride I willingly stepped onto, but crazy! To be sure, this is not a typical story and your results may be very, very different; I can assure you my other contract was a total loss.

First, what is genesis-mining, you ask? Well, it's a cloud crypto miner, meaning they set up crypto mining hardware, mine coins, sell you shares in those miners, and then pay you out the coins mined. Sounds like an interesting deal. Well, it's all speculation, but here is how it worked out on one of my tests.

From https://www.genesis-mining.com/pricing I chose the Dash Gemini package, mainly because Bitcoin was so high there was zero chance of making anything. In May of 2016 this package cost me 0.06 BTC (~$27.6 USD, given Bitcoin was around $460 on the date of my purchase). So it's been 9 months of a 24-month contract, and thanks to a 10x increase in Dash I have recovered $26.24 USD, or 0.28400789 DASH at today's value of roughly $92.41 USD/DASH. While this seems like a good thing given I have 15 months left on the contract, let's look at another investment I made in Dash in December.

On Dec 04, 2016 I invested $9.92 ($10 minus exchange fees), which was 0.01302497 BTC, for 1.13682797 DASH. That $10 purchase is today valued at $105.05.

While the Genesis contract looks like it may pay out well, the reality is that if I had taken that same 0.06 BTC ($27.6 USD) and simply bought Dash coins, it would be a net gain of nearly $270 USD versus the $0 USD gain I have with Genesis so far. Sure, there are 15 months left on my contract, but even if it maintained the trend of 0.28400789 DASH every 9 months and Dash's valuation stayed flat, I would end up with a net gain of maybe $41 USD versus the $270 USD for just buying the coin. Pretty simple to see where I should place my money in the future. Even with Coinbase's massive fees you still end up way ahead buying cryptocurrency directly versus using Genesis.
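
If you want to sanity-check that extrapolation yourself, here is the back-of-the-envelope math as a few bc one-liners, using the numbers quoted above (so treat the output as approximate):

# upfront cost: 0.06 BTC at roughly $460/BTC -> 27.60 USD
echo "0.06 * 460" | bc
# extrapolate 0.28400789 DASH per 9 months over the full 24-month term -> ~0.757 DASH
echo "0.28400789 * 24 / 9" | bc -l
# value that at ~$92.41/DASH and subtract the upfront cost -> ~42 USD net
echo "0.28400789 * 24 / 9 * 92.41 - 27.6" | bc -l

Rounding aside, that lines up with the roughly $41 net gain above, versus the few-hundred-dollar gain from just buying the coins outright.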

I knew when I went into the deal I was likely going to lose, but I was learning, and I am lucky enough that losing $30 will not set me back. What I do worry about is all those suckers they are preying on who think they are getting a good value. I would not go as far as saying they are a scam; they are a legitimate business model and you “could” make money if you use them. All I am saying is you could make a lot more just buying coins directly, and you never need to recover your upfront fees. That said, if you are looking for a business model where you always win … a cloud mining company might not be a bad one to start.

Docker for that old software you cannot update!

So I host a few websites for friends and family, and as always you have one person who runs old software that they just do not want to update. The software in question this time is Gallery (http://galleryproject.org), a PHP photo gallery that stopped producing updates 2014-06-20. Every time I updated my Ubuntu server I worried about how much trouble it was going to be to make sure it still worked. The fix: run the damn thing in Docker! Starting with a backup of the gallery web directory from my old server, I put it into /home/websites/gallery_example-com/gallery/html.

Ok, first I figured the old PHP version in CentOS 5 would be a good fit, so I started there. To make it easier I looked at the CentOS-Dockerfiles examples, starting from

https://github.com/CentOS/CentOS-Dockerfiles/blob/master/httpd/centos6/Dockerfile

Knowing I wanted to run CentOS 5, I rebuilt the Docker image using this updated Dockerfile in my project directory, ~/Development/gallery_docker/Dockerfile:

FROM centos:5.11
MAINTAINER Joshua Miller
LABEL Vendor="CentOS" \
      License=GPLv2 \
      Version=2.4.6-40

RUN yum -y update && \
    yum -y install httpd && \
    yum clean all

EXPOSE 80

# Simple startup script to avoid some issues observed with container restart
ADD run-httpd.sh /run-httpd.sh
RUN chmod -v +x /run-httpd.sh

CMD ["/run-httpd.sh"]

I also needed the run-httpd.sh file, which I put into ~/Development/gallery_docker/run-httpd.sh:

#!/bin/bash

# Make sure we're not confused by old, incompletely-shutdown httpd
# context after restarting the container. httpd won't start correctly
# if it thinks it is already running.
rm -rf /run/httpd/* /tmp/httpd*

exec /usr/sbin/apachectl -DFOREGROUND

Now I had to build the Docker image in ~/Development/gallery_docker/ with the tag jassinpain/gallery_example:1 so we can version this sucker.

docker build . -t jassinpain/gallery_example:1

And run the image, exposing container port 80 via host port 8080 for now, as we do not have our haproxy configured yet at this point.

docker run -d --name gallery_example -p 8080:80 jassinpain/gallery_example:1

At this point I was able to browse to http://server_ip:8080/ and see the sample webpage was up and running.
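
If you would rather check from a shell than a browser, a quick curl against the published port does the same sanity check (swap localhost for the server IP if you are testing remotely):

curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080/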

Next I decided to clean up, then restart with a bind mount of my gallery directory onto the Apache DocumentRoot.

docker stop gallery_example
docker rm gallery_example
docker run -d --name gallery_example -p 8080:80 -v /home/websites/gallery_example-com/gallery/html:/var/www/html jassinpain/gallery_example:1

I was lucky there was a simple README.html in my /home/websites/gallery_example-com/gallery/html directory, so once the image was up I was able to test that Apache was still working by browsing to http://server_ip:8080/README.html

From here I needed to do some debugging, so I dropped into the container console with the following command.

docker exec -i -t gallery_example /bin/bash

I quickly realized I had forgotten to add PHP. Oops. I decided to make a quick fix by installing PHP and some supporting packages that I knew Gallery wanted, then exited the container shell.

yum -y install php53 php53-cli php53-common php53-mysql ImageMagick gd
exit

I needed to restart Apache, and since this image does not have an init system the quickest way was to just restart the Docker container.

docker restart gallery_example

At this point I was able to browse to http://server_ip:8080/ and was presented with the config page of Gallery. As I stepped through I realized that the MySQL server details needed to be updated. The wonderful thing about using the bind volume is that I could just use vi to update /home/websites/gallery_example-com/gallery/html/config.php and change the hostname, which is really the DB server.

$storeConfig['hostname'] = '172.17.0.1';
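
That 172.17.0.1 is the Docker bridge gateway on my host, which is how the container reaches the MySQL server running outside Docker. If you want to confirm the gateway address on your own machine (assuming the default bridge network), something like this will print it:

docker network inspect bridge --format '{{range .IPAM.Config}}{{.Gateway}}{{end}}'
# or, on the host: ip -4 addr show docker0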

From here I was able to reload http://server_ip:8080/ and was good to go!

The final step was to make a clean Docker image by updating the Dockerfile to add the PHP packages.

FROM centos:5.11
MAINTAINER Joshua Miller
LABEL Vendor="CentOS" \
      License=GPLv2 \
      Version=2.4.6-40

RUN yum -y update && \
    yum -y install httpd php53 php53-cli php53-common php53-mysql ImageMagick gd && \
    yum clean all

EXPOSE 80

# Simple startup script to avoid some issues observed with container restart
ADD run-httpd.sh /run-httpd.sh
RUN chmod -v +x /run-httpd.sh

CMD ["/run-httpd.sh"]

And then rebuild the image:

docker build . -t jassinpain/gallery_example:2

Stop the old container, remove it, and start the new image:

docker stop gallery_example
docker rm gallery_example
docker run -d --name gallery_example -p 8080:80 -v /home/websites/gallery_example-com/gallery/html:/var/www/html jassinpain/gallery_example:2

And then verify by browsing to http://server_ip:8080/

I then put it behind a haproxy image to allow me to add more projects like my flask-sample, but that's another post!

It's been a while

I was applying for jobs a while back, trying to step back into IC roles, but kept hitting the problem of my resume having too much manager experience. I keep trying to figure out how to get past that problem, so until then I just keep working at the current job, pushing ahead and learning what I can until I can get that resume back to a good place and move to Austin. Fun stuff ahead!

Looking forward

So after looking back, it's time to look forward. One of my skills is team building; I have been successful at bringing people into companies and making them a cohesive unit. I break a few rules that a lot of managers would frown at, treating my employees more like family than employees. While some may think of that as bad, look at it this way: how do you treat your kids? Do you treat them like family? Can they still know that you're the one in charge and that you make the decisions when they need to be made? Does that mean you never have conflict? I would not say I treat employees like my children, but I do use a lot of the same tool sets. For example, it's easier to get someone to do what you need when they feel valued and are complimented on a job well done. We all want to do better work when good work is acknowledged.
Another big thing I do is go out with the team for coffee A LOT! This was something I found enjoyable as a team member back at Tagged. Every day a few of us guys would go out for coffee, walk over, sit down, and talk about work or outside life. There was no set agenda; it was teammates being friends. When I was finally able to bring in my first employee I was lucky enough to have it be one of those guys. We continued the daily coffee run and included each new employee as we added them. On top of that we went out for lunches a few times a quarter, once again with no set agenda, just enjoying a meal with friends. When we added junior team members I would pay a lot more often, and on busy days we would not make it, but as a rule everyone came. If you were not a coffee drinker then you could have tea, water, Jamba Juice, or just enjoy the walk.

Looking back

I left after 5 years feeling like I had done a poor job. The reason: I am always working to do better, and that's both a strength and a weakness.

Over the last six months a lot has changed in my life, starting most importantly with the addition of my fourth child. Given that my personal life is not the focus of this blog, that's the last I will mention of it, but I had to acknowledge it.
In addition to that, I changed jobs. I moved from Rdio, where I had been for five years, to an online ticketing company, stepping down from a VP position where I had full access to everything to a Director role with very little view of anything above me. Both companies are about the same age, but it's crazy how different they are.
The new ticketing company has a history of changing the people running the computers and networks, sometimes so fast that tasks were left midstream. For those of you who have not walked into a situation like that, it is very interesting. Unlike when I joined Rdio, I would not be starting from scratch but rather making changes in place. At Rdio, I walked in to a rack of servers and a core router that had been sitting for months but were not active. Within a few weeks I had the network reconfigured, the build system up, and the rack of systems running and Chefed. Was everything perfect? Hell no, but it was a solid foundation. From there I built the TechOps team and we continued on that solid base.
Being the originator of an environment, you can see every hole, everything that is wrong. I would often look at what we had and feel like a failure. Then I would bring in candidates and they would be really impressed. We had focused on a solid core, a very traditional and structured one, but solid. We had very clear network paths from X to Y, and systems were well defined and single purpose. After five years I started to lose the view of how well we had actually designed the system, so I left feeling like I had done a poor job. The reason: I am always working to do better and do not settle. Strength or weakness?

Be that kind of person

I want everyone I deal with to have a chance of walking away with my knowledge.

For some reason a lot of my personal focus for the last couple of years has been culture and treating people like you want to be treated. While this is great for people, I admit it's also driven by being selfish. I want everyone I deal with to have a chance of walking away with my knowledge. This means if you ask, and it's at all within my power, I will sit down and talk to you about anything. Often this is stuff I know really well, but at times it is about simple observations. Why do I do this? Simply because it's what I would want others to do for me. I am not talking about total time wasters, but rather, if you have a goal or are trying to understand, I make it a priority to be supportive when possible.
I came to this conclusion when dealing with my eldest son. Often he would come to me with a statement and we would sit down and talk it out. Usually I would try not to give him the answer but walk him through the issue and figure out how to find the answer. In the end he has not only learned what he was expecting to but also how to think. This same process works with co-workers.
When helping another employee I want them to walk away with not only the knowledge of how to fix an error but how to identify the issue. This includes the steps I go through and the thought process in my head, and often I end up learning too. So while this is technically a fluff piece, I am writing it to encourage you: next time someone asks you why, how, or any other question, look for the opportunity to help. Notice I use the word help, not teach, because they are not always the same thing.

Chef & Strainer and how my chef-repo was lacking.

Just to start, I love Seth's work. I know he gets pissed when people comment about his work; this is not a comment on his work, it's an “I hope your Google search comes up better than mine.”

OK, that said, I spent a night learning how to get Strainer running with travis-ci. I read the instructions on the https://github.com/customink/strainer page and thought, OK, this should take no time at all. Turns out I was wrong. Looking back, this should have taken me less than two minutes to figure out, but let me be honest, I blew more than an hour on this one.

For my testing I created a simple travis-chef-repo directory, dropped in a cookbooks directory with the openssh cookbook and its dependencies, apt and iptables, then followed the directions on https://github.com/customink/strainer to create a .travis.yml and a Strainerfile. Commit it, and you're off to the races. Not so quick.
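
For reference, the .travis.yml is tiny. Mine looked roughly like the sketch below; treat it as a starting point, since the exact Ruby version is whatever you target:

language: ruby
rvm:
  - 1.9.3
script: bundle exec strainer test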

First you need to make sure your Gemfile is correct in travis-chef-repo. Here is my example:

jmiller11:travis-chef-repo jmiller$ cat Gemfile

source "https://rubygems.org"

gem 'rake'
gem 'chef'
gem 'foodcritic'
gem 'rspec'
gem 'strainer'

Then create the Strainerfile:

jmiller11:travis-chef-repo jmiller$ cat Strainerfile

# Strainerfile
knife test: bundle exec knife cookbook test $COOKBOOK
foodcritic: bundle exec foodcritic -f any $SANDBOX/$COOKBOOK

Then let's test it!

jmiller11:travistest-chef-repo jmiller$ bundle exec strainer test openssh
I could not detect if you were a chef-repo or a cookbook!
Strainer marked build OK

What the heck, why not? Maybe I have that command wrong. Let's try this one; maybe it will auto-detect the cookbook.

jmiller11:travistest-chef-repo jmiller$ bundle exec strainer test
I could not detect if you were a chef-repo or a cookbook!
Strainer marked build OK

Or how about if we give it the path? That must be it.

jmiller11:travistest-chef-repo jmiller$ bundle exec strainer test --cookbooks-path=./cookbooks/
I could not detect if you were a chef-repo or a cookbook!
Strainer marked build OK

Really, how do you tell if this is a repo? Let's go look at the code:

https://github.com/customink/strainer/blob/master/lib/strainer/sandbox.rb#L54

else
  Strainer.ui.warn "I could not detect if you were a chef-repo or a cookbook!"
  @cookbooks = []
end

Umm, yeah, that helps. Let's look at how we entered this if block:

https://github.com/customink/strainer/blob/master/lib/strainer/sandbox.rb#L54

if chef_repo?

OK, so let's look at the chef_repo? method:

https://github.com/customink/strainer/blob/master/lib/strainer/sandbox.rb#L244-L249

# Determines if the current project is a chef repo
#
# @return [Boolean]
# true if the current project is a chef repo, false otherwise
def chef_repo?
  @_chef_repo ||= begin
    chef_folders = %w(.chef certificates config cookbooks data_bags environments roles)
    (root_folders & chef_folders).size > 2
  end
end
Wait, really? You need at least three of the following directories (.chef, certificates, config, cookbooks, data_bags, environments, roles) or you are not a chef-repo? At least it's easy to fix.

mkdir .chef certificates config cookbooks data_bags environments roles

Let's test this sucker!

jmiller11:travistest-chef-repo jmiller$ bundle exec strainer test openssh
# Straining 'openssh (v1.3.5)'
knife test | bundle exec knife cookbook test openssh
knife test | checking openssh
knife test | Running syntax check on openssh
knife test | Validating ruby files
knife test | Validating templates
knife test | SUCCESS!
foodcritic | bundle exec foodcritic -f any /Users/jmiller/Development/travistest-chef-repo/cookbooks/openssh
foodcritic | FC007: Ensure recipe dependencies are reflected in cookbook metadata: /Users/jmiller/Development/travistest-chef-repo/cookbooks/openssh/recipes/iptables.rb:20
foodcritic | Terminated with a non-zero exit status. Strainer assumes this is a failure.
foodcritic | FAILURE!
Strainer marked build as failure
jmiller11:travistest-chef-repo jmiller$

I win! Let's add, commit, and push. Same “I could not detect if you were a chef-repo or a cookbook!” from Travis … really, what's going on here? Since I am not a git expert it took me a minute to realize git does not track empty directories, so let's add a file to each one:

for i in certificates config .chef cookbooks data_bags environments roles; do touch $i/README.md;done

git add, commit, push, wait the 8 minutes for the gem install, and success! I now see the same foodcritic errors I saw on the command line and have a failing build. Syntax check and lint tools running; time to move on.

Chef AWS

OK, so you have your AWS account set up and you're ready to launch an EC2 instance with Hosted Chef. What do you need to know? I know this may seem simple to someone who has been doing it for a long time, but it took me a few hours to figure out exactly what I needed. Here are a few quick notes on what I found worked for me on my MacBook Air, although it should work just fine on Linux also. I assume you have a working chef install and can upload cookbooks to your Hosted Chef server. Secondly, I am using the chef-dk and had to install the knife-ec2 plugin. Assuming you have set up the chef-dk as they recommend, all you should need to do is run the chef gem install command. This will install the ruby gems into your home directory at ~/.chefdk, so you will not need sudo access.

jmiller11:fcs2-chef-repo jmiller$ ls -l ~/.chefdk/
total 0
drwxr-xr-x 3 jmiller staff 102 May 3 13:35 gem
jmiller11:fcs2-chef-repo jmiller$

Here is the command to install the plugin:

chef gem install knife-ec2

Append the following to your .chef/knife.rb

# AWS support
knife[:aws_access_key_id] = ENV['AWS_ACCESS_KEY_ID']
knife[:aws_secret_access_key] = ENV['AWS_SECRET_ACCESS_KEY']
# Optional if you're using Amazon's STS
#knife[:aws_session_token] = ENV['AWS_SESSION_TOKEN']
knife[:aws_ssh_key_id] = ENV['AWS_MYPEM']
knife[:region] = ENV['AWS_REGION']
knife[:bootstrap_version] = '11.12.4-1'

Append the following to your ~/.bash_profile

AWS_ACCESS_KEY_ID=XXXXXXXXXXX
AWS_SECRET_ACCESS_KEY=XXXXXXXXXXX
# note the AWS_MYPEM does not have .pem extension listed
# it found my key that was in ~/.ssh/ and is chmod 600
AWS_MYPEM=XXXXXXXX
AWS_REGION=us-east-1
# Optional if you're using Amazon's STS
#AWS_SESSION_TOKEN=""
export AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN AWS_MYPEM AWS_REGION

Source your bash profile to make sure the new variables are active:

jmiller11:fcs2-chef-repo jmiller$ . ~/.bash_profile
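
A quick way to confirm the variables actually made it into your environment (the values will print in the clear, so mind who is looking over your shoulder):

env | grep '^AWS'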

Ok, let's test that you're set up correctly by running "knife ec2 server list". This will likely be empty for you, but as long as it returns the header you're fine:

jmiller11:fcs2-chef-repo jmiller$ knife ec2 server list
Instance ID Name Public IP Private IP Flavor Image SSH Key Security Groups IAM Profile State
i-b9ea45e9 base1204-1 m1.small ami-0145d268 aws-jmiller default terminated
i-88e34cd8 m1.small ami-0145d268 aws-jmiller default terminated
i-25e84775 base1204-1 m1.small ami-0145d268 aws-jmiller default terminated
i-c7e44b97 base1204-1 m1.small ami-0145d268 aws-jmiller default terminated
i-41e14e11 base1204-1 54.227.113.203 10.236.185.159 m1.small ami-0145d268 aws-jmiller www, default running
i-7dfc532d base1204-2 54.237.5.212 10.151.112.113 m1.small ami-0145d268 aws-jmiller www, default running
i-53b57800 webserver1 t1.micro ami-3202f25b me default terminated
jmiller11:fcs2-chef-repo jmiller$

Launch Command:

Ok, here I am using a simple role called “base” that was uploaded to my Hosted Chef account; all it does at this point is set up chef-client to run as a cron job to save memory. The AMI is an Ubuntu 12.04 image that will run on an m1.small instance, with the “default” and “www” security groups and an easy-to-read name of “base1204-1”, using the ssh key file for my AWS key. I need to figure out if there is a better way than defining the ssh key on the command line, but this works for now.
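
The role itself is not shown in this post, but for context, a minimal roles/base.rb along those lines might look roughly like the sketch below. It assumes the community chef-client cookbook, which provides a cron recipe; adjust to whatever cookbook you actually use.

# roles/base.rb (hypothetical sketch, not my actual role)
name "BASE"
description "Base role: run chef-client from cron instead of as a daemon, to save memory"
run_list "recipe[chef-client::cron]"

With a role like that uploaded ahead of time, here is the launch command: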

jmiller11:fcs2-chef-repo jmiller$ knife ec2 server create -r 'role[BASE]' -I ami-0145d268 -f m1.small -x ubuntu -G default -N base1204-1 -i ~/.ssh/aws-jmiller

The command will run and you should see output something like this:

jmiller11:fcs2-chef-repo jmiller$ knife ec2 server create -r 'role[BASE]' -I ami-0145d268 -f m1.small -x ubuntu -G default,www -N base1204-2 -i ~/.ssh/aws-jmiller
Instance ID: i-7dfc532d
Flavor: m1.small
Image: ami-0145d268
Region: us-east-1
Availability Zone: us-east-1a
Security Groups: default, www
Tags: Name: base1204-2
SSH Key: aws-jmiller

Waiting for instance…………………
Public DNS Name: ec2-54-237-5-212.compute-1.amazonaws.com
Public IP Address: 54.237.5.212
Private DNS Name: ip-10-151-112-113.ec2.internal
Private IP Address: 10.151.112.113

Waiting for sshd….done
Connecting to ec2-54-237-5-212.compute-1.amazonaws.com
ec2-54-237-5-212.compute-1.amazonaws.com Installing Chef Client…

ec2-54-237-5-212.compute-1.amazonaws.com Chef Client finished, 7/12 resources updated in 14.133802702 seconds

Instance ID: i-7dfc532d
Flavor: m1.small
Image: ami-0145d268
Region: us-east-1
Availability Zone: us-east-1a
Security Groups: default, www
Security Group Ids: default
Tags: Name: base1204-2
SSH Key: aws-jmiller
Root Device Type: ebs
Root Volume ID: vol-cc24df85
Root Device Name: /dev/sda1
Root Device Delete on Terminate: true
Public DNS Name: ec2-54-237-5-212.compute-1.amazonaws.com
Public IP Address: 54.237.5.212
Private DNS Name: ip-10-151-112-113.ec2.internal
Private IP Address: 10.151.112.113
Environment: _default
Run List: role[BASE]

Ok, that's the basics. If you got this far you might want to check out chef-metal.

How ‘DevOps’ Is Killing The Operations Engineer

Before you start to complain, I am a fan of collaboration, but DevOps might just be the best joke ever!

Before you start to complain, I am a fan of collaboration, but DevOps might just be the best joke ever! The truth is it means something different to every person. For years I have defined DevOps as engineers trying to get Ops out of the way and pushing forward without those pesky sysadmins. You think I am overblowing it? I have been in Silicon Valley for the boom of DevOps and I hear it all the time: “We don't need ops, we can just have a developer do it.” The number of new startups who use AWS, thus allowing them to forgo a system administrator, never ceases to amaze me. My biggest problem with this is you're cutting the legs out from under yourself, but you're assuring me job security, so maybe I should keep my mouth shut.
I have been an operations engineer for over ten years now, and honestly, developers and ops engineers have different ways of functioning. To me a good software engineer has long-term focus, can get deep into a project, and can crunch on the same code for extended durations. Give a good coder a project that will take weeks or even months and they will put their head down and solve your problem. As a generalization, these people do not handle interrupt-driven work well, and they often do not handle high-pressure situations well either.
Operations people, on the other hand, do the majority of their work under massive interruption and constant pressure. Tell an operations engineer the site is down and they will not focus on what the origin of the problem is; they will focus on getting the product back online and come back later to fully understand why. This does not mean they do not troubleshoot, but they are trying to identify the immediate cause, not the who or the root. One might argue this is short-sighted, but when you are stuck waiting for someone to figure out why the web servers stopped, you are killing your customer experience. I would argue: restart the web pool, get the product back online, and then start to look at root cause once you have identified the customer-impacting problem and completed the shortest-path solution.
When you start off by having your engineers run operations, you never allow new ops people to start from the ground up and develop their skills, learning the pain points as the system grows. This ensures that when you grow to the point that you need an operations engineer, there is a shortage of trained people available. One might argue that some of the developers who started the company by running operations will become your operations engineers and cover this, but to me that's like using vice grips to remove a bolt.