30 August 2015

Step by step breakdown of IBM Bluemix image deploy

1) Creating local image


$ ice --local build -t $IMAGE_NAME:$TAG $DIR

The first step of working with Bluemix is to create a local image. This step is analogous to creating an image in Docker. The main difference is that the build produced here can be pushed to the IBM Bluemix cloud repository. Bluemix can connect to Dockerhub, however, it can also host your images privately out of the box, and sync with your local Docker daemon images.
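
For example, with hypothetical values filled in (the image name, tag, and build directory below are placeholders; substitute your own):

$ IMAGE_NAME=myapp TAG=v1 DIR=.
$ ice --local build -t $IMAGE_NAME:$TAG $DIR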


2) Tag and Push


$ ice --local tag $IMAGE_NAME $REGISTRY_URL/$NAMESPACE/$IMAGE_NAME:$TAG

$ ice --local push $REGISTRY_URL/$NAMESPACE/$IMAGE_NAME:$TAG

The local image is now tagged and pushed to the IBM Bluemix cloud repository, ready to back a container in the following step.
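
Continuing the hypothetical values from step 1, and with a made-up registry URL and namespace (substitute your own):

$ REGISTRY_URL=registry.ng.bluemix.net NAMESPACE=mynamespace
$ ice --local tag $IMAGE_NAME $REGISTRY_URL/$NAMESPACE/$IMAGE_NAME:$TAG
$ ice --local push $REGISTRY_URL/$NAMESPACE/$IMAGE_NAME:$TAG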


3) Create and run container


$ ice run --name $UNIQUE_CONTAINER_NAME $EXPOSED_PORTS $NAMESPACE/$IMAGE_NAME:$TAG

This piece may throw some people off, and for good reason: "Why is it that when I delete a container, I can no longer reuse that container's name?" Indeed, each time you run an image as a cloud container, the container needs a name that has never been used before; otherwise you'll soon hit an exception saying the container already exists. It's a cloud repository, after all. So assign a fresh name each time you run this command.
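
One cheap way to guarantee a never-before-used name (purely a naming convention on my end, not something the ice CLI mandates) is to suffix a timestamp:

$ UNIQUE_CONTAINER_NAME=myapp-$(date +%Y%m%d%H%M%S)
$ ice run --name $UNIQUE_CONTAINER_NAME $EXPOSED_PORTS $NAMESPACE/$IMAGE_NAME:$TAG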


After this step you're done! You can, of course, continue: either by requesting a public IP address, or by making code edits and handling the redeploy that follows.


4) Request a public IP address if you don't have one already


$ ice ip request

This step is self-explanatory. As you can probably foresee, you'll then have to bind this IP address to your cloud container for it to do anything useful. You can also request a public IP address through the IBM Bluemix GUI instead of the CLI.


5) Bind/unbind IP address to a cloud container


$ ice ip unbind $PUBLIC_IP_ADDR $OLD_CONTAINER_NAME

$ ice ip bind $PUBLIC_IP_ADDR $UNIQUE_CONTAINER_NAME

If you'd like to reach various ports inside your application from the outside, you will have to bind a public IP address to the container. I avoid running this step while another container is still using the public IP address my clients rely on; first I unbind it, or stop and remove the other container running the application. Of course, be careful when removing a container, since it can no longer be started afterwards. You may want to keep the previous container(s) around so you can revert in case your new changes have broken something.
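
As a sketch, the safe rollover uses only the commands already covered in these steps: move the IP over, verify, and only then remove the old container (see step 7):

$ ice ip unbind $PUBLIC_IP_ADDR $OLD_CONTAINER_NAME
$ ice ip bind $PUBLIC_IP_ADDR $UNIQUE_CONTAINER_NAME
$ ice rm $OLD_CONTAINER_NAME    # only after the new container is verified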


6) Logging


$ ice logs $UNIQUE_CONTAINER_NAME

The ability to view logs was a paramount concern. In addition to options such as SSH-ing into your container, you can log in to IBM Bluemix from a Docker VM environment and view your logs in real time.


7) Cleanup


$ ice rm $OLD_CONTAINER_NAME

Let's get rid of the previous, inactive, and unbound container for good measure and housekeeping.


To remove previous images, simply get rid of them both locally and in the cloud:


$ ice --local rmi $REGISTRY_URL/$NAMESPACE/$IMAGE_NAME:$TAG

$ ice --local rmi $NAMESPACE/$IMAGE_NAME:$TAG

$ ice --cloud rmi $NAMESPACE/$IMAGE_NAME:$TAG

Check out the BlueImage script for automation of the simple deployment outlined above.


20 August 2015

Reset Docker Connection to Wifi

Sometimes when you change location and hop onto a different Wi-Fi network, Docker returns a timeout exception. The following sequence has worked for me:

$ docker-machine restart default      # Restart the environment
$ eval $(docker-machine env default)  # Refresh your environment settings

It comes from a Stack Overflow post, and it makes for a nifty bash command whenever I'm somewhere new and need to reconnect to the Docker VM/environment.
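
Here's a sketch of how I'd wrap the pair into a single shell function for .bash_profile (the name docker_reset is arbitrary):

docker_reset() {
    docker-machine restart default        # Restart the environment
    eval $(docker-machine env default)    # Refresh your environment settings
}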

19 August 2015

Configuring EC2 CLI on OSX

Make sure you have Java installed on your box.

Download and install the EC2 API tools:

$ wget http://s3.amazonaws.com/ec2-downloads/ec2-api-tools.zip
$ sudo mkdir /usr/local/ec2
$ sudo unzip ec2-api-tools.zip -d /usr/local/ec2


Drill into your ec2 directory to see the version number:

$ ls /usr/local/ec2
ec2-api-tools-1.7.5.0

Here, it's 1.7.5.0.

Go to https://console.aws.amazon.com/iam/home
There you will find or create your AWS credentials in the section "Security Credentials".
You will also have to assign permissions in the section "Permissions".

In your .bash_profile, add the following:

export JAVA_HOME=$(/usr/libexec/java_home)
export EC2_HOME=/usr/local/ec2/ec2-api-tools-1.7.5.0
export PATH=$PATH:$EC2_HOME/bin
export AWS_ACCESS_KEY=[KEY]
export AWS_SECRET_KEY=[SECRET]


You can test your EC2 configuration in a new Terminal window

$ ec2-describe-regions


This should generate an output of values similar to the following:

REGION eu-west-1 ec2.eu-west-1.amazonaws.com
REGION ap-southeast-1 ec2.ap-southeast-1.amazonaws.com
REGION ap-southeast-2 ec2.ap-southeast-2.amazonaws.com
REGION eu-central-1 ec2.eu-central-1.amazonaws.com
REGION ap-northeast-1 ec2.ap-northeast-1.amazonaws.com
REGION us-east-1 ec2.us-east-1.amazonaws.com
REGION sa-east-1 ec2.sa-east-1.amazonaws.com
REGION us-west-1 ec2.us-west-1.amazonaws.com
REGION us-west-2 ec2.us-west-2.amazonaws.com

Reminiscences of a Docker Operator

Docker takes a while to get used to.
I suggest you use the official node image.
Maybe make a Dockerfile that installs nodemon for live reload of your code
and use a volume to mount your code directly in it.
Use docker exec -it <container> bash to get a shell into your dev env
and you have yourself a fully isolated node env.
Run npm install to install all dependencies.
As long as you keep that container around, it will keep the stuff you installed with npm, but if you destroy the container, it will go back to the state of the image.
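
As a rough sketch of that workflow with plain docker commands (the node:4 tag, the node-dev container name, and the paths are my own assumptions; adjust to your project):

$ docker run -d --name node-dev -v "$PWD":/usr/src/app -w /usr/src/app node:4 tail -f /dev/null
$ docker exec -it node-dev bash
# then, inside the container:
# npm install && npm install -g nodemon && nodemon server.js

If you'd rather bake nodemon into the image, move the npm install -g nodemon into a RUN step in your Dockerfile.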

17 August 2015

NodeMCU ADC dependency matrix

ADC required the RC module, and both were added to the rest of the dependencies in the following list:

node,file,gpio,wifi,net,tmr,adc,uart,mqtt,cjson,rc,dht

16 August 2015

git add openshift remote

Step 1:
Find your OpenShift app's Git SSH URL via
$ rhc apps
and copy the Git URL corresponding to the application of interest.

Step 2:
You can then add that SSH URL as a remote, named openshift below:
$ git remote add openshift -f "$SSH"
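
With the remote in place, you can deploy by pushing to it (OpenShift builds from the master branch by default):

$ git push openshift master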

15 August 2015

MQTT broker with HTTP bridge

For those looking to develop a driver, bridge, or thin abstraction layer between their devices and services: all is right with the world.

Definitely check out, and brush up on, your Node.js-fu, because it makes developing (i.e. prototyping and/or fully scalable solutions) absolutely possible. This tidbit won't get into the nitty-gritty of how to maximize Node.js for multicore; those resources are available in the API documentation as well as other blog posts floating around the internets.

Here, I'll demonstrate a simple and effective way to utilize the Mosca framework alongside the Node.js http module to get dual MQTT:HTTP citizenship.

If you're interested in developing your own MQTT broker in JavaScript, then look no further than Mosca. It's MQTT 3.1.1 compliant and works with amazingly high fidelity; in other words, it's quality stuff. Both Mosca and the underlying Node.js modules function via C++ and are pretty close to the metal, so the abstraction provides plenty of bang for the buck.

Additionally, the npm dispatch module is used to provide some pretty useful routing capability. Here, the code lives in a single file: in a real project you'd want to launch this server.js from cluster, tuck your controllers away neatly, and pass named functions in lieu of the anonymous ones. Hopefully, this will get you started.

var authenticate = function (client, username, password, callback) {
    console.log('ping ', username);
    // if (username == "test" && password.toString() == "test")
        callback(null, true);
    // else
    //     callback(null, false);
}

var authorizePublish = function (client, topic, payload, callback) {
    callback(null, true);
}

var authorizeSubscribe = function (client, topic, callback) {
    callback(null, true);
}

var mosca = require('mosca');

var ascoltatore = {
 type: 'mongo',
 url: 'mongodb://localhost:27017/mqtt',
 pubsubCollection: 'ascoltatori',
 mongo: {}
};

var moscaSetting = {
 port: 1883,
 host: "192.168.foo.bar", // specify an host to bind to a single interface
 logger: {
  level: 'debug'
 },
 persistence: {
     factory: mosca.persistence.Mongo,
     url: 'mongodb://localhost:27017/mqtt'
   },
 backend: ascoltatore
};

var http     = require('http')
  , dispatch = require('dispatch')
  , broker = new mosca.Server(moscaSetting);

httpServ = http.createServer(
 dispatch({
        '/': function(req, res, next){
            console.log('alpha romeo');
            // end the response so requests to / don't hang
            res.end('MQTT broker with HTTP bridge is up');
        },
        '/user/:id': function(req, res, next, id){
            // ...
        },
        '/user/posts': function(req, res, next){
            // ...
        },
        '/user/posts/(\\w+)': function(req, res, next, post){
            // ...
        }
    })
);

broker.attachHttpServer(httpServ);

httpServ.listen(3000);

broker.on('ready', setup);

function setup() {
    broker.authenticate = authenticate;
    broker.authorizePublish = authorizePublish;
    broker.authorizeSubscribe = authorizeSubscribe;
    
    console.log('Mosca broker is up and running.');
}

broker.on("error", function (err) {
    console.log(err);
});

broker.on('clientConnected', function (client) {
    console.log('Client Connected \t:= ', client.id);
});

broker.on('published', function (packet, client) {
    console.log("Published :=", packet);
});

broker.on('subscribed', function (topic, client) {
    console.log("Subscribed :=", client.packet);
});

broker.on('unsubscribed', function (topic, client) {
    console.log('unsubscribed := ', topic);
});

broker.on('clientDisconnecting', function (client) {
    console.log('clientDisconnecting := ', client.id);
});

broker.on('clientDisconnected', function (client) {
    console.log('Client Disconnected     := ', client.id);
});
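
A quick way to smoke-test both faces of the server, assuming MongoDB and the broker are running, the Mosquitto command-line clients are installed, and the host in moscaSetting is reachable as localhost (adjust -h otherwise):

$ mosquitto_sub -h localhost -p 1883 -t 'demo/topic' &
$ mosquitto_pub -h localhost -p 1883 -t 'demo/topic' -m 'hello'
$ curl http://localhost:3000/

The subscriber should print hello, the broker console should log the connect, subscribe, and publish events, and curl should reach the dispatch route on the shared HTTP server.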

Check out the full project on GitHub.