Automated Deployments and Development Pipeline for loklak Server

The loklak server project has been growing: the number of users consuming the APIs and the number of contributors have increased, and the project has been extending its reach into territories like IoT and artificial intelligence chat. As the project grew, it became important to keep the server easily deployable, which was previously done by integrating one-click deployment buttons into loklak server so that anybody can spin up their own server.

As we grew we made quite a few mistakes in development: overriding others’ work, conflicting patches, and a system that kept breaking as we migrated from Java 7 to Java 8. To cope with the increase in contributions and the number of members working together, we needed a stronger engineering workflow to ensure that development goes on unhampered and that less time is spent pulling and reviewing changes.


We’ve strongly adopted a build-and-deploy-per-commit workflow: instead of periodically taking the upstream changes and deploying them onto the server, we now leverage the continuous integration tools that already run our builds to also perform the deployments onto the staging/dev servers. This was done using Heroku and Travis CI: every successful Travis build triggers Heroku to deploy and run the server on the staging instance. This has dramatically reduced the errors we encountered before and also serves as the testing ground for new features before they move to the production server at loklak.org.

Implementation

The deploy section of the project’s .travis.yml looks like this:

deploy:
  provider: heroku
  email: [email protected]
  strategy: git
  buildpack: https://github.com/loklak/heroku_buildpack_ant_loklak
  api_key:
    secure: D2o+G28w42F9rDbde......PL/Q=
  app: loklak-server-dev
  on:
    branch: development


Visualizing NMEA Datasets from GPS Tracking devices with Loklak

Loklak now supports the NMEA format and gives developers access to the data in a much friendlier JSON format, which can be readily plugged into the required map visualizer and used on the web to create dashboards.

The stream URL is the data URL to which the GPS devices are streaming their data, which loklak needs to read, convert and serve for reuse. For a given stream the response is as follows:

{
  "1": {
    "lat": 0,
    "long": 0,
    "time": 0,
    "Q": 0,
    "dir": 0,
    "alt": 0,
    "vel": 0
  },
  "2": {
    "lat": 0,
    "long": 0,
    "time": 0,
    "Q": 0,
    "dir": 0,
    "alt": 0,
    "vel": 0
  },
  "3": {
    "lat": 0,
    "long": 0,
    "time": 0,
    "Q": 0,
    "dir": 0,
    "alt": 0,
    "vel": 0
  },
  "4": {
    "lat": 0,
    "long": 0,
    "time": 0,
    "Q": 0,
    "dir": 0,
    "alt": 0,
    "vel": 0
  },
  "5": {
    "lat": 0,
    "long": 0,
    "time": 0,
    "Q": 0,
    "dir": 0,
    "alt": 0,
    "vel": 0
  },
  "6": {
    "lat": 0,
    "long": 0,
    "time": 0,
    "Q": 0,
    "dir": 0,
    "alt": 0,
    "vel": 0
  }
}

We now need to visualize this information. Loklak now has a built-in tool which makes this happen. You’ll see a screen which asks for the stream URL; provide the URL where the GPS device is sending the information.

NMEA App Loklak

The tool then visualizes the information on a map.

Visualized points on Map

This is made possible by the format conversion the nmea.txt servlet performs and by the client-side packaging of the objects while loading them onto a map.

function getTracking() {
    var url = document.getElementById('url').value;
    var cUrl = window.location.href;
    var pName = window.location.pathname;
    var baseUrl = cUrl.split(pName)[0];
    var urlComplete = baseUrl + '/api/nmea.txt?stream=' + url;
    // default map center and zoom level
    var centerlat = 52;
    var centerlon = 0;
    var zoomLevel = 2;
    $.getJSON(urlComplete, function (data) {
        for (var key in data) {
            var obj = data[key];
            var latitudeObject, longitudeObject;
            for (var prop in obj) {
                if (prop === 'lat') {
                    latitudeObject = obj[prop];
                }
                if (prop === 'long') {
                    longitudeObject = obj[prop];
                }
            }
            // 'map' is the Leaflet map object initialized elsewhere on the page
            var marker = L.marker([latitudeObject, longitudeObject]).addTo(map);
            // connectTheDots() is an app helper that collects the marker
            // coordinates so the track can be drawn as a connected line
            var spiralCoords = connectTheDots(marker);
            L.polyline(spiralCoords).addTo(map);
        }
    });
}
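
The snippet assumes an input field with id `url` and a global Leaflet `map` object; a minimal harness matching the default center and zoom defined above might look like this (hypothetical markup, not the actual app page):

<div id="map" style="height: 400px;"></div>
<input type="text" id="url" placeholder="GPS device stream URL">
<button onclick="getTracking()">Visualize</button>
<script>
    // create the Leaflet map that getTracking() draws onto
    var map = L.map('map').setView([52, 0], 2);
    L.tileLayer('https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png').addTo(map);
</script>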

A request is made to the required NMEA data stream, the latitude and longitude of each point are extracted, and each point is pushed individually onto the map.

Loklak is now slowly moving towards supporting multiple devices and visualizing the data it obtains from their data streams. NMEA is a global standard for GPS and tracking devices, so loklak effectively supports more than 10 devices: the Garmin 15, 15H and others from the Garmin series, the Xexun 10X series and the Navbie series.


Loklak now supports IOT streams from GPS Devices

GPS is one of the most widely used techniques for geolocation. There are a lot of commercial GPS tracking and reporting devices available in the market, like the Xexun 10X series and the Garmin GPS devices, and lots of open source devices like GPS Cookie and GPS Sandwich. GPS is a satellite-based navigation system which accurately determines a receiver’s position. These tracking devices generally have two modes of operation: STORE and STREAM.

STORE writes the NMEA GPS data onto the SD card / storage device present on the tracker. This information can later be visualized, or converted into a specific format as the nmea.txt call on the loklak server does. STREAM takes the data and uses a GPRS connection to send the NMEA sentences to the server for which the device has been configured. There has to be a node listening on a port so that the devices can stream the information to it; alternatively, the device can stream to a specific URL and loklak can use the `stream=` parameter in its GET request to convert the NMEA sentences into visualizer data.
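
For example, such a request might look like this (the stream URL here is hypothetical):

http://localhost:9000/api/nmea.txt?stream=http://example.com/device/nmea.log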

A sample NMEA sentence looks as follows

/*
Sample Read Format
==================
  1      2    3        4 5         6 7     8     9      10    11
$xxxxx,123519,A,4807.038,N,01131.000,E,022.4,084.4,230394,003.1,W*6A
  |      |    |        | |         | |     |     |      |     |
  |      |    |        | |         | |     |     |      |     Variation sign
  |      |    |        | |         | |     |     |      Variation value
  |      |    |        | |         | |     |     Date DDMMYY
  |      |    |        | |         | |     COG
  |      |    |        | |         | SOG
  |      |    |        | |         Longitude Sign
  |      |    |        | Longitude Value
  |      |    |        Latitude Sign
  |      |    Latitude value
  |      Active or Void
  UTC HHMMSS
 */

Making a query on a stream of such sentences on loklak returns a response as follows:

{
  "1": {
    "lat": 60.073490142822266,
    "long": 19.67548179626465,
    "time": 94154,
    "Q": 2,
    "dir": 82.0999984741211,
    "alt": 0.699999988079071,
    "vel": 1.7999999523162842
  },
  "2": {
    "lat": 60.07347106933594,
    "long": 19.675262451171875,
    "time": 94140,
    "Q": 2,
    "dir": 82.0999984741211,
    "alt": 0.10000000149011612,
    "vel": 1.7999999523162842
  }
}

Some devices give only a few variables in each sentence, whereas others also provide information such as direction, heading and velocity. Hence the NMEA parser needs to handle multiple types of sentences. This is done in the NMEAServlet by giving each sentence type its own implementation of the SentenceParser interface’s parse() function, which works as follows:

class GPVTG implements SentenceParser {
	public boolean parse(String[] tokens, GPSPosition position) {
		// GPVTG sentences carry course/speed over ground;
		// only the direction token is extracted here
		position.dir = Float.parseFloat(tokens[3]);
		return true;
	}
}

For a new NMEA sentence format, say XXXXX, we only need to add a few lines to enhance the parser, using the same SentenceParser interface:

class XXXXX implements SentenceParser {
	public boolean parse(String[] tokens, GPSPosition position) {
		// X is the index of the token carrying the value of interest
		position.dir = Float.parseFloat(tokens[X]);
		return true;
	}
}

This is a great milestone for loklak because it now lets IoT devices stream to it and has the conversion logic to turn the sentence streams/logs into the required JSON format so that they can be visualized. The NMEA visualizer is coming soon: you will be able to enter a stream URL or attach a log and it will perform the required operations.

Cheers!


Dockerize the loklak server and publish docker images to IBM Containers on Bluemix Cloud

Docker is an open source platform which makes life easy for system developers and sysadmins to build, ship and run distributed applications. Unlike virtual machines (VMs), Docker containers allow you to package an application with all of its dependencies into a standardized unit for software development.


To create an image of the loklak server, the first thing you need is a running docker daemon; on a desktop machine this usually means booting the blank `default` docker machine in which your images will be stored. To run loklak in a container, we first need to create a new image for the loklak server: the base operating system, the required updates for that operating system, and the build instructions for the application. To build the loklak server we need to clone it from `loklak/loklak_server` and then compile it using `ant`.

Once the docker daemon is up, create a new directory called `Docker` and `cd` into it. Inside it, create a directory `loklak` to hold the `loklak` image definition. There you create a Dockerfile specifying the base operating system and the different instructions that need to be executed:

FROM ubuntu:latest
MAINTAINER Ansgar Schmidt <[email protected]>
ENV DEBIAN_FRONTEND noninteractive

# update
RUN apt-get update
RUN apt-get upgrade -y

# add packages
RUN apt-get install -y git ant openjdk-8-jdk

# clone the github repo
RUN git clone https://github.com/loklak/loklak_server.git
WORKDIR loklak_server

# compile
RUN ant

# Expose the web interface ports
EXPOSE 80 443

# change config file (\(...\) captures the setting name, reused as \1)
RUN sed -i.bak 's/^\(port.http=\).*/\180/'                conf/config.properties
RUN sed -i.bak 's/^\(port.https=\).*/\1443/'              conf/config.properties
RUN sed -i.bak 's/^\(upgradeInterval=\).*/\186400000000/' conf/config.properties

# hack until loklak supports a no-daemon mode:
# keep the container alive after start.sh returns
RUN echo "while true; do sleep 10;done" >> bin/start.sh

# start loklak
CMD ["bin/start.sh"]

This fetches the latest Ubuntu image and sets a non-interactive Debian frontend. Dockerfiles are executed step by step: the base image is first pulled, then the operating system is updated and the distribution upgraded. Once this is done, `RUN apt-get install -y git ant openjdk-8-jdk` installs git, ant and the Java 8 JDK. The repository of the loklak server is then cloned and compiled with `ant`, and the container exposes ports 80 and 443 so that it can serve HTTP and HTTPS. The `sed` commands find and replace the HTTP port, HTTPS port and upgrade interval settings in the loklak server’s configuration file. Finally, the `CMD` instruction executes `bin/start.sh` to start the loklak server; the appended loop keeps the container alive.

Now that we understand how the Dockerfile works, let’s see how the image is actually built and run.
Use `docker images` to see the list of images that you have.

Now create the loklak image with `docker build -t loklak .`, which uses the `Dockerfile` in the current directory to build the image.

The command takes several seconds to run and reports its outcome. During this process it fetches each of the image layers and puts them together to create your docker container. Once the build succeeds, you can run the image with `docker run loklak`. It is then ready to be pushed to Docker Hub after renaming it into your namespace as `useraccountname/loklak`.
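
Putting those steps together on the command line (with `useraccountname` standing in for your Docker Hub account):

docker build -t loklak .
docker run loklak
docker tag loklak useraccountname/loklak
docker push useraccountname/loklak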

To push these to the IBM Bluemix cloud, you need the Cloud Foundry command line tool `cf`. Then install the plugin for containers: `cf install-plugin https://static-ice.ng.bluemix.net/ibm-containers-linux_x64`

Once this is installed, connect to the IBM Bluemix cloud under a namespace, copy the image you pushed to Docker Hub into IBM Containers with `cf ic cpi username/loklak loklak`, and run the container.

Containers make life really simple for devops: they let you create and deploy instantly as well as scale on demand.


IOT Data push in different formats now supported

Lots of IoT devices out there currently log a lot of information as CSV files, while many devices and satellites push data to stream services in XML format. A lot of structured storage data is exported by developers into formats like XML on most IoT devices, hence device dashboards usually contain options like export to XML/CSV/JSON.

The loklak server stores JSON, but since devices can send out any type of information, it becomes important for the server to support multiple data formats, for streams as well as for device pushes. Over the last week we have integrated logic to parse XML content, convert it to JSON, and expose the conversion publicly as an API endpoint at /api/xml2json.json, with `data` as the GET/POST parameter carrying the payload. This allows a lot of clients to be built around XML-to-JSON conversion. We’ll soon have an app showing how this is possible, and integrations into the LQL application, which is meant to make the loklak API easier for developers trying to use and access loklak data and the server.
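
For example, a hypothetical XML payload <user><name>loklak</name></user>, URI-encoded into the data parameter, would be sent as

http://localhost:9000/api/xml2json.json?data=%3Cuser%3E%3Cname%3Eloklak%3C%2Fname%3E%3C%2Fuser%3E

and would come back as

{"user": {"name": "loklak"}}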

Similarly, conversion from CSV to JSON is now supported. A major problem we hit during development was the conversion scheme for special characters like carriage returns and new lines (\r, \n): passing them directly in the `data` parameter string makes them hard to replace and costs valuable string searching and processing. To overcome this, the client itself URI-encodes these characters, \r as %0D and \n as %0A, and passes the data string to the publicly exposed /api/csv2json.json endpoint, again with `data` as the GET/POST parameter carrying the CSV-encoded information. Every CSV sent is converted and returned as JSON, which can be caught by the corresponding .success() equivalent of the language used by the developer of the IoT device or service, which can then push the converted data back to the loklak server, making the data harvesting a completely synchronous, step-by-step process built on data format conversions.

This data API is publicly accessible, which means it can potentially support a lot of applications that currently have to roll their own lexical parsers and tokenizers for these data conversions.

For example, the CSV data

twittername, name, org
0rb1t3r, Michael Peter Christen, FOSSASIA
sudheesh001, Sudheesh Singanamalla, FOSSASIA

sent in the URL

http://localhost:9000/api/csv2json.json?data=twittername,name,org%0A0rb1t3r,Michael%20Peter%20Christen,FOSSASIA%0Asudheesh001,Sudheesh%20Singanamalla,FOSSASIA%0A

results in the following JSON:

[
  {
    "org": "FOSSASIA",
    "name": "Michael Peter Christen",
    "twittername": "0rb1t3r"
  },
  {
    "org": "FOSSASIA",
    "name": "Sudheesh Singanamalla",
    "twittername": "sudheesh001"
  }
]

The CDL.java and XML.java files contain the helper classes and methods needed for these operations. This makes such conversions anywhere in the loklak server as simple as String jsonData = XML.toJSONObject(data).toString(); for XML and JSONArray array = CDL.toJSONArray(csv); for CSV.
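
A minimal, self-contained sketch of how these helpers can be used (assuming the org.json-style CDL and XML classes bundled with the loklak server):

import org.json.CDL;
import org.json.JSONArray;
import org.json.JSONObject;
import org.json.XML;

public class ConversionExample {
    public static void main(String[] args) {
        // XML -> JSON object
        String xml = "<user><name>loklak</name></user>";
        JSONObject fromXml = XML.toJSONObject(xml);
        System.out.println(fromXml); // {"user":{"name":"loklak"}}

        // CSV -> JSON array; the first line is treated as the header row
        String csv = "twittername,name,org\n0rb1t3r,Michael Peter Christen,FOSSASIA";
        JSONArray fromCsv = CDL.toJSONArray(csv);
        System.out.println(fromCsv);
    }
}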

The tokeners parse the input and pick out tokens, mapping the XML entities as follows:

static {
    entity = new java.util.HashMap(8);
    entity.put("amp",  XML.AMP);
    entity.put("apos", XML.APOS);
    entity.put("gt",   XML.GT);
    entity.put("lt",   XML.LT);
    entity.put("quot", XML.QUOT);
}

The XML parser looks for the starting angle bracket < and the ending angle bracket > and parses the content between them to find the tag name. Once this is obtained as the key, it looks for the >...</ pattern and retrieves the value of the given XML tag. The pair is pushed into a JSON object as key:value, and a collection of such objects is created for every complete XML structure given the DTD schema.

These additions are very useful for the IoT devices and services whose data is going to be integrated into the loklak server. The next task is to take the home automation system on the IBM Watson cloud and make it push its data to loklak in parallel, now that we support so many data formats. We are very close to supporting real-time applications streaming data to the loklak server.


Loklak API SDK Now supports golang

The Go programming language is quite a recent language. It’s statically typed and compiled, unlike Python or other scripting languages; it has greater memory safety than C, supports garbage collection, and has built-in support for HTTP and network requests through the "net/http" and "net/url" packages. Go is scalable to very large systems like Java and C++, and its small set of keywords keeps programs productive and easily readable.


Some of the key things we notice with golang, coming from a C/C++ background, is that there’s nothing called a class. Wait, what? Exactly, you read that right: what would be a class elsewhere is expressed in Go as a struct. A loklak object of this kind is used throughout the Go SDK for the various API requests.

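The struct definition itself is not shown in this excerpt; a minimal sketch, with fields inferred from the Connect and search functions below, might look like this:

type Loklak struct {
	baseUrl   string
	query     string
	since     string
	until     string
	from_user string
	count     string
	source    string
}

The imports used by the SDK are as follows:
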
import (
    "encoding/json"
    "fmt"
    "net/http"
    "net/url"
    "os"

    "github.com/hokaccha/go-prettyjson"
)

In golang it’s recommended to list the built-in packages first, followed by the remote libraries you use; here that is the pretty-print JSON library at github.com/hokaccha/go-prettyjson. And since we follow the DRY (Don’t Repeat Yourself) principle, we write a shared function called getJSON as follows:

func getJSON(route string) (string, error) {
	r, err := http.Get(route)
	if err != nil {
		return "", err
	}
	defer r.Body.Close()

	var b interface{}
	if err := json.NewDecoder(r.Body).Decode(&b); err != nil {
		return "", err
	}
	out, err := prettyjson.Marshal(b)
	return string(out), err
}

If you’re coming from a C++/C background you’d notice something really odd about this: the return types are declared in the function header itself. So func getJSON(route string) (string, error) takes a string in the variable route as input and returns two values of types (string, error). The code above takes the request URL and returns the corresponding JSON response.
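
Because of the two return values, a caller always handles the error explicitly; a small usage sketch (the query URL is just an example):

out, err := getJSON("http://loklak.org/api/search.json?q=fossasia")
if err != nil {
	fatal(err) // the SDK's error helper
}
fmt.Println(out)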

Methods are generally not the preferred style for a REST-oriented API wrapper like loklak’s, so most queries are implemented directly as functions. We do, however, start with one method, used to set the loklak server URL:

// Initiation of the loklak object
func (l *Loklak) Connect(urlString string) {
	u, err := url.Parse(urlString)
	if err != nil {
		fmt.Println(u)
		fatal(err) // fatal is the SDK's error helper
	} else {
		l.baseUrl = urlString
	}
}

This takes a string urlString as a parameter and defines a method Connect() on the Loklak object which updates its base URL field. It is used as follows in the main package:

loklakObject := new(Loklak)
loklakObject.Connect("http://loklak.org/")

The Go language has built-in facilities, as well as library support, for writing concurrent programs. Concurrency refers not only to CPU parallelism, but also to asynchrony: letting slow operations like a database or network read run while the program does other work, as is common in event-based servers. This can be very useful for developers building scalable applications on top of loklak data.
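
As an illustration (not part of the SDK), several searches could run concurrently with goroutines, using the Connect() method above and the search() function shown below, plus the standard sync package:

// run one search per query string, concurrently
queries := []string{"fossasia", "loklak"}
var wg sync.WaitGroup
for _, q := range queries {
	wg.Add(1)
	go func(query string) {
		defer wg.Done()
		l := new(Loklak)
		l.Connect("http://loklak.org/")
		l.query = query
		fmt.Println(search(l))
	}(q)
}
wg.Wait() // wait for all searches to finish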

A very interesting thing about golang is that it doesn’t support function overloading or default parameters, so the search API, which was implemented using default parameters in PHP and Python, can’t be implemented that way in Go. This has been tackled by using a plain search() function and prepackaging the request parameters in a Loklak object.

// Search function is implemented as a function and not as a method
// Package the parameters required in the loklak object and pass accordingly
func search(l *Loklak) string {
	apiQuery := l.baseUrl + "api/search.json"
	req, _ := http.NewRequest("GET",apiQuery, nil)

	q := req.URL.Query()
	
	// Query constructions
	if l.query != "" {
		constructString := l.query
		if l.since != "" {
			constructString += " since:"+l.since
		}
		if l.until != "" {
			constructString += " until:"+l.until
		}
		if l.from_user != "" {
			constructString += " from:"+l.from_user
		}
		fmt.Println(constructString)
		q.Add("q",constructString)
	}
	if l.count != "" {
		q.Add("count", l.count)
	}
	if l.source != "" {
		q.Add("source", l.source)
	}
	req.URL.RawQuery = q.Encode()
	queryURL := req.URL.String()
	out, err := getJSON(queryURL)
	if err != nil {
		fatal(err)
	}
	return out
}

To use this search capability, one packages the request parameters into a Loklak object and then calls the function:

func main() {
	loklakObject := new(Loklak)
	loklakObject.Connect("http://loklak.org/")
	loklakObject.query = "fossasia"
	loklakObject.since = "2016-05-12"
	loklakObject.until = "2016-06-02"
	loklakObject.count = "10"
	loklakObject.source = "cache"
	searchResponse := search(loklakObject)
	fmt.Println(searchResponse)
}

The golang API has great potential for aiding applications written in Go that want to build highly scalable, high-performance systems using loklak data. The API is available on github; feel free to open an issue in case you find a bug or need an enhancement.


Developer Tools: Build your query using LQL

Writing request queries is definitely a hard job for developers trying to use any API: sometimes the query strings go wrong, and sometimes you don’t get the output you expected. We saw this problem in the loklak API and built the Loklak Query Language (LQL) for developers.

This tool takes the fields and the type of query you want to make and dynamically creates the request URL in front of you. You can even test this URL and see the pretty-printed JSON responses returned by the server. Here’s the best part: you get to use this with the custom URL of any loklak server that you deploy.


The team has put quite some effort into scripting easy deployment buttons with Heroku, Bluemix, Scalingo, Docker Cloud etc., and tools like this help developers build their queries and find the data they want.


There are a lot of features in store, and the tool already shows the query for a lot of API endpoints. It’s a great tool to play with. In the future we’d also like to integrate the way queries have to be made for the different programming language APIs that we support, so that you can grab the required code segments directly and use them with the supporting library.

It’s a lightweight application: every time a change is made in any of the fields in the form, the query gets regenerated completely on the client side, and at the same time the fields change based on the API call that has been chosen, as sketched below.
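
A hypothetical sketch of that client-side behaviour (element ids invented for illustration; this is not the actual LQL source):

// regenerate the request URL whenever a form field changes
var fields = document.querySelectorAll('#lql-form input, #lql-form select');
for (var i = 0; i < fields.length; i++) {
    fields[i].addEventListener('change', function () {
        var server = document.getElementById('server').value;
        var q = document.getElementById('q').value;
        var url = server + '/api/search.json?q=' + encodeURIComponent(q);
        document.getElementById('generated-url').textContent = url;
    });
}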

Have a look at the LQL here or head over to our github and give us feedback or open an issue.


Loklak Tweets – Tweets and much much more!

When the tweets functionality was first implemented, the loklak web client application could post a regular tweet. The tweets functionality has since undergone massive changes and multiple code refactors, so that it now offers more features alongside the usual ones implemented by Twitter:

  1. The ability to post a map as an attachment to the tweet.
  2. The ability to post a large text attachment or markdown to Twitter.

Once logged in, the user can click on the tweet functionality and is offered a lot of options: post a regular tweet, a tweet with an attached image, a tweet with a map attachment, or a tweet with markdown.


Ability to compose new tweets with map attachments


Ability to write markdown code as attachment to the tweets.


Live view of the markdown text rendered on the screen so that the user can see the large text content he/she is attaching as markdown text to twitter.

The user can also post their location via loklak to Twitter, similar to Twitter’s own geolocation service: a navigator geolocation request takes the coordinates, which are then reverse-looked-up in the database so that the location name and nearby areas can be retrieved.
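
A sketch of that client-side flow (the reverse-lookup endpoint and its parameters are assumptions for illustration, not the actual client code):

// ask the browser for the user's coordinates, then reverse-geocode them
var reverseLookupUrl = '/api/reverse.json'; // hypothetical endpoint path
navigator.geolocation.getCurrentPosition(function (position) {
    var lat = position.coords.latitude;
    var lon = position.coords.longitude;
    // resolve the coordinates against the server's location database
    $.getJSON(reverseLookupUrl + '?lat=' + lat + '&lon=' + lon, function (place) {
        console.log(place); // location name and nearby areas
    });
});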

Large text tweet with markdown content

Map attachment tweets

Image upload tweet

All the tweets posted here are cross-posted to loklak to make it easier for the server to harvest them. After a lot of testing to make sure the application performs well under heavy load, we’ve moved from test.loklak.net to loklak.net. We have collected more than 42 million tweets so far on the main server.
