
21 September 2015

JS Event Patterns

Events in JS are pretty amazing, and notifications, emits, emissions, or what have you are quite often fundamental to programming with JS. In the case of UI it's literally every user interaction, and the same holds true for pub/sub mechanisms.

JS has a method, Event.stopPropagation, that I will make the case here introduces an anti-pattern into your code.

Here's the scenario:

A user clicks the submit button on a form; the code would look like the following:

submit.onclick = function (e) {
    e.preventDefault();  // keep the browser from submitting the form itself
    // .. do your thing
    e.stopPropagation(); // stop the event from propagating further
};

1) The event occurs, and there is one file and one method in which the event is used.

Stopping event propagation here is nominal. This is a good use case, although using stopPropagation wouldn't make much of a difference either way.

2) You have multiple files that have multiple event listener methods.

Using stopPropagation should be avoided here because it can act as an anti-pattern, and here's how:

Let's say we have two different methods, and in two different files:

method1, file1

function eListener1(e, p) {
// tasks of eL1
}

and

method2, file2

function eListener2(e, p) {
// tasks of eL2
}

By definition, the layout of your code is completely different in the two scenarios: in one you're very much in the same place, and in the other the event is released into the great open yonder, without knowing exactly who is going to use it, or why, as you're building.

Stopping event propagation here is a mistake: it blocks delegation and future use cases. For instance, suppose your event is currently handled in eListener1 above, and now you want to create another event listener in another class or file, call it eListener3, to perform some task on the same event. You would not be able to do this with stopPropagation; in effect you've limited your own possibilities, and the rate-limiting step added to the soup is now the very method that I argue should have been avoided. A minimal sketch of the problem is below.
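
To make the delegation point concrete, here is a minimal sketch (my own illustration; the element id and the document-level handler are assumptions, not part of the original scenario). A listener directly on the button calls stopPropagation, so a delegated listener added later on document never sees the click:

var submit = document.getElementById('submit'); // assumes <button id="submit">

submit.addEventListener('click', function (e) {
    e.preventDefault();
    // .. do your thing
    e.stopPropagation(); // the click never bubbles past the button
});

// Added later, perhaps in another file: a delegated listener on document.
// Because of stopPropagation above, this handler never runs for the button.
document.addEventListener('click', function (e) {
    if (e.target.id === 'submit') {
        console.log('delegated handler -- never reached');
    }
});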

Stopping propagation is appropriate for a terminal event, such as deleting an item, where the interaction has a known end; not for general user interaction events. Signing out could be another such use case, although even there it would be redundant in many cases.

For general events, it's wasted energy. The question of whether to stop propagation on every general event is perhaps analogous to the rift between UDP and TCP: whether you cut delivery short for efficiency, or let every message reach everyone who might be listening.


In general, it's a good idea to avoid exotic methods unless you know exactly why you're using them. However, if you are building massively scaled, request-based systems that move large amounts of data, then this advice may not apply to you.

A potential, or real, benefit of stopPropagation is saving computing and/or bandwidth resources.

So what's someone to do without stopping propagation? Glad you asked: the simple conditional. The key here is to add a unique identifier to your object (e.g. this.supercalifradgilisticexpialidocious = 'here it is';), and in every one of your listeners you'd have your conditionals:

function eListener3(e, p) {
    if (e.type === 'click') {
        switch (p.foo.supercalifradgilisticexpialidocious) {
            case 'here it is':
                // do func
                break;

            default:
                break;
        }
    }
}
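
For completeness, here is a hedged sketch of how that marker and the listeners might be wired together; the fan-out dispatcher, the payload object, and its foo property are illustrative assumptions rather than part of the pattern above:

// A simple fan-out: every listener receives every event along with a payload,
// and each listener's own conditional decides whether the event concerns it.
var payload = { foo: { supercalifradgilisticexpialidocious: 'here it is' } };
var listeners = [eListener1, eListener2, eListener3];

document.addEventListener('click', function (e) {
    listeners.forEach(function (listener) {
        listener(e, payload);
    });
});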

05 September 2015

Local Sublime Text via remote SSH

Initially, and quite ashamedly, as with many other things, I created a shell alias to access my server over SSH, as a way to avoid looking up the IP address. I cringe now, believe you me.

Well, then I needed access to Sublime Text from my remote environment as a way to make life a wee bit more convenient (in this case, WAY more convenient), and figured I'd let you know how I did it in case it helps.



There are good articles about how to configure your local and SSH environments with Sublime Text, such as this one written by Limina Studio and this one written by Daniel Demmel.

One other key piece of information is the function of the ~/.ssh/config file itself. There's a great writeup from Nerderati available at this link.

The workflow is as follows:

1. Install the rsub plugin via Sublime's Package Control, or follow the instructions provided in the public Github repository.

2. In your local environment edit your ~/.ssh/config file and specify your SSH IP address and User:
Host yourAlias
    HostName "$IP"
    User "$USER"
    RemoteForward 52698 127.0.0.1:52698
3. In your SSH environment, as root, do:
# wget -O /usr/local/bin/rsub https://raw.github.com/aurora/rmate/master/rmate
# chmod +x /usr/local/bin/rsub
# shutdown -r now
4. Henceforth, you may access your server over SSH just as you would with an alias:
$ ssh yourAlias
5. Restart Sublime, SSH in, and you gain the added advantage of being able to open remote files in your local environment's Sublime Text editor (by running rsub on a file from the remote shell).


01 September 2015

Guidelines for clinicians

Pleased to announce Guidelines, a resource for clinicians.

This was a project long overdue.

1) Single-site access to gold standard guidelines is, surprisingly, still not addressed

The purpose of Guidelines is pure and simple: the world's leading associations regularly issue recommended guidelines that clinicians use on a daily basis to treat medically compromised patients.

The variety and scope of these publications may be wide; however, there are a number of well defined sources that are pre-selected as the authority or gold standard of care. Curation of these guidelines is therefore not labor intensive or time consuming enough to warrant a dedicated organization, and thus they remain neglected.

2) Clinicians should conveniently be able to access Guidelines

Of anyone with a full plate, the clinician is the most likely to be too consumed by other activities to maintain a dedicated, convenient point of entry to the most current and relevant standards.

The clinician may no longer have the convenient access to gold standard resources they once had. Once a clinician enters private practice, they risk becoming isolated and sandboxed away from resources whose absence may often go unnoticed or unappreciated.

As a result, the clinician's access to a thorough and up-to-date list of standards to study may be compromised, and patients are the population who bear the consequences of that risk.

3) When clinicians have access to resources, patients benefit

The sole purpose of a clinical practice, and hence Guidelines, is to benefit the patient.

There should not be a barrier to information that is readily available over the wire. Instruments such as Guidelines therefore exist to maintain a peer reviewed, up-to-date, thorough, and accessible list of resources available to clinicians via a multitude of avenues.

OPEN CALL FOR CLINICAL SCIENCE CONTRIBUTORS

ALL DISCIPLINES OF HEALTH CARE


PROJECT AVAILABLE ON GITHUB FOR IMMEDIATE CONTRIBUTION





30 August 2015

Step by step breakdown of IBM Bluemix image deploy


1) Creating local image


$ ice --local build -t $IMAGE_NAME:$TAG $DIR

The first step of working with Bluemix is to create a local image. This step is analogous to creating an image in Docker. The main difference is that the build produced here can be pushed to the IBM Bluemix cloud repository. Bluemix can connect to Docker Hub; however, it can also host your images privately out of the box and sync with your local Docker daemon's images.


2) Tag and Push


$ ice --local tag $IMAGE_NAME $REGISTRY_URL/$NAMESPACE/$IMAGE_NAME:$TAG

$ ice --local push $REGISTRY_URL/$NAMESPACE/$IMAGE_NAME:$TAG

The local image is now tagged and pushed to the IBM Bluemix cloud repository, where it can be added to a container in the following step.


3) Create and run container


$ ice run --name $UNIQUE_CONTAINER_NAME $EXPOSED_PORTS $NAMESPACE/$IMAGE_NAME:$TAG

This piece may throw some people off, and for good reason: "How come, when I delete a container, I can no longer use that container's name?" Indeed, each time you apply an image to a cloud container, the container needs a unique name that was never used before; otherwise you'll soon find an exception stating that the container already exists. It's a cloud repository, after all. Therefore, assign a new name to the container each time you run this command.


After this step you're done! You can, of course, continue either by requesting a public IP address or by making code edits and handling the changes that follow.


4) Request a public IP address if you don't have one already


$ ice ip request

This step is self-explanatory. As you can probably foresee, you'll then have to assign this IP address to your cloud container for it to do anything useful. You may also obtain a public IP address through the IBM Bluemix GUI instead of the CLI.


5) Bind/unbind IP address to a cloud container


$ ice ip unbind $PUBLIC_IP_ADDR $OLD_CONTAINER_NAME

$ ice ip bind $PUBLIC_IP_ADDR $UNIQUE_CONTAINER_NAME

If you'd like to reach various ports inside your application over a public IP address, you will have to bind that address to the container. I avoid running this step while another container is still using the public IP address my clients rely on; binding is preceded by unbinding the address, or by stopping and removing the other container running the application. Of course, be careful when removing a container, since a removed container can no longer be started. You may keep previous container(s) around to revert to in case your new adjustments have broken your code.


6) Logging


$ ice logs $UNIQUE_CONTAINER_NAME

The ability to view logs was a paramount concern. In addition to options such as SSH-ing into your container, you can log in to IBM Bluemix from a Docker VM environment and view your logs in real time.


7) Cleanup


$ ice rm $OLD_CONTAINER_NAME

Let's get rid of the previous, inactive, and unbound container for good measure and housekeeping.


To remove previous images, simply delete them from both your local machine and the cloud registry:


$ ice --local rmi $REGISTRY_URL/$NAMESPACE/$IMAGE_NAME:$TAG

$ ice --local rmi $NAMESPACE/$IMAGE_NAME:$TAG

$ ice --cloud rmi $NAMESPACE/$IMAGE_NAME:$TAG

Check out the BlueImage script, which automates the simple deployment outlined above.


20 August 2015

Reset Docker Connection to Wifi

Sometimes when you change location and hop onto a different wifi network, Docker returns a timeout exception. The following sequence has worked for me:

$ docker-machine restart default      # Restart the environment
$ eval $(docker-machine env default)  # Refresh your environment settings

It came from a Stack Overflow post, and it makes for a nifty bash command for whenever I go somewhere new and need to reconnect to the Docker VM/environment.

19 August 2015

Configuring EC2 CLI on OSX

Make sure you have Java installed on your box.

Download and install EC2:

$ wget http://s3.amazonaws.com/ec2-downloads/ec2-api-tools.zip
$ sudo mkdir /usr/local/ec2
$ sudo unzip ec2-api-tools.zip -d /usr/local/ec2


Drill into your ec2 directory to see the version number:

$ ls /usr/local/ec2
ec2-api-tools-1.7.5.0

Here, it's 1.7.5.0.

Go to https://console.aws.amazon.com/iam/home
There you will find or create your AWS credentials in the section "Security Credentials".
You will also have to assign permissions in the section "Permissions".

In your .bash_profile, add the following:

export JAVA_HOME=$(/usr/libexec/java_home)
export EC2_HOME=/usr/local/ec2/ec2-api-tools-1.7.5.0
export PATH=$PATH:$EC2_HOME/bin
export AWS_ACCESS_KEY=[KEY]
export AWS_SECRET_KEY=[SECRET]


You can test your EC2 configuration in a new Terminal window

$ ec2-describe-regions


This should generate an output of values similar to the following:

REGION eu-west-1 ec2.eu-west-1.amazonaws.com
REGION ap-southeast-1 ec2.ap-southeast-1.amazonaws.com
REGION ap-southeast-2 ec2.ap-southeast-2.amazonaws.com
REGION eu-central-1 ec2.eu-central-1.amazonaws.com
REGION ap-northeast-1 ec2.ap-northeast-1.amazonaws.com
REGION us-east-1 ec2.us-east-1.amazonaws.com
REGION sa-east-1 ec2.sa-east-1.amazonaws.com
REGION us-west-1 ec2.us-west-1.amazonaws.com
REGION us-west-2 ec2.us-west-2.amazonaws.com

Reminiscences of a Docker Operator

Docker takes a while to get used to.
I suggest you use the official node image.
Maybe make a Dockerfile that installs nodemon for live reload of your code
and use a volume to mount your code directly in it.
Use docker exec -it <container> bash to get a shell into your dev env
and you have yourself a fully isolated node env.
npm install to install all dependencies.
As long as you keep that container around, it will keep the stuff you installed with npm, but if you destroy the container, it will go back to the state of the image.

17 August 2015

NodeMCU ADC dependency matrix

ADC required the RC module, and both were added to the remaining dependencies in the following list:

node,file,gpio,wifi,net,tmr,adc,uart,mqtt,cjson,rc,dht

16 August 2015

git add openshift remote

Step 1:
Find your Openshift app's Git SSH URL via
$ rhc apps
and copy the Git URL corresponding to the application of interest.

Step 2:
You can then add that SSH value as a remote, named openshift below:
$ git remote add openshift -f "$SSH"

15 August 2015

MQTT broker with HTTP bridge

For those looking to develop a driver, bridge, or thin abstraction layer between their devices and services: all is right with the world.

Definitely brush up on your Node.js-fu, because it makes developing (prototyping and/or fully scalable solutions) absolutely possible. This tidbit won't get into the nitty gritty of how to maximize Node.js across multiple cores; those resources are available in the API documentation as well as other blog posts floating around the internets.

Here, I'll demonstrate a simple and effective way to utilize the Mosca framework alongside the Node.js http module to get dual MQTT:HTTP citizenship.

If you're interested in developing your own MQTT broker in Javascript, then look no further than Mosca. It's MQTT 3.1.1 compliant and works with amazingly high fidelity; in other words, it's quality stuff. Both Mosca and the core Node.js modules are backed by C++ and sit pretty close to the metal, so the abstraction provides plenty of bang for the buck.

Additionally, the npm dispatch module is used to provide some pretty useful routing capability. Here, the code is in single-file mode; what you'd want to do is call this server.js file from cluster, tuck your controllers away neatly, and pass named functions in lieu of the anonymous ones. Hopefully, this will get you started.

var authenticate = function (client, username, password, callback) {
 console.log('ping ', username);
    // if (username == "test" && password.toString() == "test")
        callback(null, true);
    // else
    //     callback(null, false);
}

var authorizePublish = function (client, topic, payload, callback) {
    callback(null, true);
}

var authorizeSubscribe = function (client, topic, callback) {
    callback(null, true);
}

var mosca = require('mosca');

var ascoltatore = {
    type: 'mongo',
    url: 'mongodb://localhost:27017/mqtt',
    pubsubCollection: 'ascoltatori',
    mongo: {}
};

var moscaSetting = {
    port: 1883,
    host: "192.168.foo.bar", // specify a host to bind to a single interface
    logger: {
        level: 'debug'
    },
    persistence: {
        factory: mosca.persistence.Mongo,
        url: 'mongodb://localhost:27017/mqtt'
    },
    backend: ascoltatore
};

var http     = require('http')
  , dispatch = require('dispatch')
  , broker = new mosca.Server(moscaSetting);

var httpServ = http.createServer(
    dispatch({
        '/': function (req, res, next) {
            console.log('alpha romeo');
        },
        '/user/:id': function (req, res, next, id) {
            // ...
        },
        '/user/posts': function (req, res, next) {
            // ...
        },
        '/user/posts/(\\w+)': function (req, res, next, post) {
            // ...
        }
    })
);

broker.attachHttpServer(httpServ);

httpServ.listen(3000);

broker.on('ready', setup);

function setup() {
    broker.authenticate = authenticate;
    broker.authorizePublish = authorizePublish;
    broker.authorizeSubscribe = authorizeSubscribe;
    
    console.log('Mosca broker is up and running.');
}

broker.on("error", function (err) {
    console.log(err);
});

broker.on('clientConnected', function (client) {
    console.log('Client Connected \t:= ', client.id);
});

broker.on('published', function (packet, client) {
    console.log("Published :=", packet);
});

broker.on('subscribed', function (topic, client) {
    console.log("Subscribed :=", client.packet);
});

broker.on('unsubscribed', function (topic, client) {
    console.log('unsubscribed := ', topic);
});

broker.on('clientDisconnecting', function (client) {
    console.log('clientDisconnecting := ', client.id);
});

broker.on('clientDisconnected', function (client) {
    console.log('Client Disconnected     := ', client.id);
});
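
To sanity-check the broker once it is running, a small client can connect, subscribe, and publish over MQTT. Below is a minimal sketch using the npm mqtt package; the package, the localhost address, and the topic name are assumptions for illustration and are not part of the project above:

var mqtt = require('mqtt');

// Connect to the Mosca broker started above (adjust host/port to your moscaSetting).
var client = mqtt.connect('mqtt://localhost:1883');

client.on('connect', function () {
    client.subscribe('test/topic');
    client.publish('test/topic', 'hello from a test client');
});

client.on('message', function (topic, message) {
    console.log('received := ', topic, message.toString());
    client.end();
});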

Check out the full project on Github

31 January 2015

Optional Javascript Function Parameters

Optional parameters can be included in Javascript functions without the need to specify a default value. Some view this as a flaw in Javascript that can let errors slip by, while others view it as flexibility, similar to what can be observed in life. Compare the Python version first:

>>> def xyz (a, b, c=None):
...   print (a)
...   print (b)
...   if (c):
...     print (c)
... 
>>> xyz(1,2,3)
1
2
3
>>> xyz(1,2)
1
2


In Python, without specifying a default value of None, the function would throw an exception on an arity mismatch, as it would in C, ML, and just about every other strongly or statically typed procedural or functional language. In Javascript, however, a parameter is only bound by its use within the function. The equivalent would be:

> function xyz (a, b, c) {
... console.log(a)
... console.log(b)
... if (c) { console.log(c) }
... }
> xyz(1,2,3)
1
2
3
> xyz(1,2)
1
2
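
One caveat with the if (c) guard above (my observation, not a claim from the original): it treats any falsy argument, such as 0 or an empty string, the same as an omitted one. Checking for undefined is a safer test for a genuinely missing parameter:

function xyz(a, b, c) {
    console.log(a);
    console.log(b);
    if (typeof c !== 'undefined') {
        console.log(c);
    }
}

xyz(1, 2, 0); // prints 1, 2, 0 -- whereas if (c) would have skipped the 0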