Snackbar for Error Handling in Twitter Followers Insight loklak App

In this blog post, I am going to explain how the Twitter Followers Insight app handles the error which occurs when no query is passed to the “Search” function, i.e., when the search proceeds with an empty query.

How to handle the Exception

In such cases, I have used a snackbar/toast. A snackbar/toast pops up an error notification on the screen when an exception occurs.

Script for Snackbar / Toast:

In the script below, the error is initially set to null, i.e., no error. The “showError” function is called whenever no query or an empty query is submitted. It shows an error popup which stays visible only for a limited time.

    $scope.error = null;

    $scope.showError = function() {
        $(".snackbar").addClass("show");
        setTimeout(function(){ $(".snackbar").removeClass("show") }, 3000);
    }

 

The next snippet checks whether the query passed is undefined or empty. If so, the error message, which was null earlier, is set and shown on the screen to the end user. Snackbars animate upwards from the edge of the screen.

        if ($scope.query === '' || $scope.query === undefined) {
            $scope.spinner = false;
            $scope.error = "Please enter a valid Username";
            $scope.showError();
            return;
        }
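
For context, below is a minimal sketch of the CSS this script assumes, where the “show” class controls visibility and the upward animation. These styles are hypothetical; the app’s actual CSS may differ:

    /* Hypothetical snackbar styles (assumed, not taken from the app) */
    .snackbar {
        visibility: hidden;
        position: fixed;
        left: 50%;
        transform: translateX(-50%);
        bottom: 30px;
        padding: 16px;
        background-color: #333;
        color: #fff;
    }
    .snackbar.show {
        visibility: visible;
        animation: fadein 0.5s; /* slide up from the edge of the screen */
    }
    @keyframes fadein {
        from { bottom: 0; opacity: 0; }
        to   { bottom: 30px; opacity: 1; }
    }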

 

Resources:

Snackbar for Error Handling in Twitter Followers Insight loklak App

Updating the UI of the generator form in Open Event Webapp

  • Add a pop-up menu bar similar to the one shown in Google/Susper.
  • Add a version deployment link at the bottom of the page like the one shown in staging.loklak.org.


 

  • Implementing the top-bar and the pop-up menu bar

The first task was to introduce a top-bar and a pop-up menu bar in Generator. The top-bar would contain the text Open Event Webapp Generator and an icon button on its right side which would show a pop-up menu. The pop-up menu would contain a number of icons linking to different pages like the FOSSASIA blog and its official website, different projects like loklak, SUSI and Eventyay, and also to the Webapp project’s Readme and issues page.

Creating a top navbar is easy, but the pop-up menu is comparatively tougher. The first step was to gather the small images of the different services. Since this feature had already been implemented in the Susper project, we just copied all the icon images from there into a folder named icons in the Open Event Webapp. Then we created a custom menu div to hold the different icons and present them in an aesthetic manner, wrote the HTML code for the menu, and added CSS to decorate and position it. We also had to attach a click event handler to the pop-up menu button for toggling the menu on and off; a sketch of such a handler follows the markup below.

Here is an excerpt of the code. The whole file can be seen here.

<div class="custom-navbar">
 <a href='.' class="custom-navtitle">
   <strong>Open Event Webapp Generator</strong> <!-- Navbar Title -->
 </a>
 <div class="custom-menubutton">
   <i class="glyphicon glyphicon-th"></i> <!-- Pop-up Menu button -->
 </div>
 <div class="custom-menu"> <!-- Custom pop-up menu containing different links -->
   <div class="custom-menu-item">
     <a class="custom-icon" href="http://github.com/fossasia/open-event-webapp" target="_blank"><img src="./icons/code.png">
       <p class="custom-title">Code</p></a>
   </div>
   <!-- Code for other links to different projects-->
 </div>
</div>
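
The toggle handler mentioned above can be as simple as the following sketch. The class names match the markup above, but this is a hypothetical version; the project’s actual code may differ:

// Hypothetical sketch: toggle the pop-up menu when the menu button is clicked.
$('.custom-menubutton').click(function() {
  $('.custom-menu').slideToggle(200); // show or hide the menu with a short animation
});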

Here is a screenshot of how the top-bar and the pop-up menu look!


  • Adding version deployment info to the bottom

The next task was to add a footer to the page containing the version deployment info. The user can click on that link and be taken to the latest version of the code which is currently deployed.

To show the version info, we make use of the GitHub API. We need the hash of the latest commit made on the development branch, so we send a request to the GitHub API for the latest hash and then dynamically add the received info and link to the footer. The user can then click on that link and will be taken to the latest deployment page of the webapp!

var apiUrl = "https://api.github.com/repos/fossasia/open-event-webapp/git/refs/heads/development";
$.ajax({url: apiUrl, success: function(result){
 var version = result['object']['sha'];
 var versionLink = 'https://github.com/fossasia/open-event-webapp/tree/' + version;
 var deployLink = $('#deploy-link');
 deployLink.attr('href', versionLink);
 deployLink.html(version);
}});
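
For reference, the response from that endpoint is a JSON object whose object.sha field carries the latest commit hash, roughly of this shape (abridged; the hash is shortened here for illustration):

{
  "ref": "refs/heads/development",
  "object": {
    "sha": "d6f1a9e...",
    "type": "commit",
    "url": "https://api.github.com/repos/fossasia/open-event-webapp/git/commits/d6f1a9e..."
  }
}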

This is how the footer looks after the API response:


References:

Updating the UI of the generator form in Open Event Webapp

Twitter Followers Insight App for loklak Apps Site

Twitter Followers Insight is an app for checking the followers and following lists of an account; as we click on a name, the chain continues with that user. The app also helps to visualize the data returned from the loklak user information API, showing the distribution of followers and following across the world in the form of a pie chart.

Related issue: https://github.com/fossasia/apps.loklak.org/pull/291

Developing the App

In the initial stage of the app, the main challenge was to implement the clickable feature, i.e., to make each user’s name in the displayed list clickable so that clicking it navigates to the next list as the query changes. This was tricky but easy to solve, as I only had to call the AngularJS search function with an input parameter.

Script for storing and displaying the data:

The script below shows how the details are fetched from the JSON object returned by the loklak Userdata API. The details are then stored in a list/array which is a scope variable. The array is then iterated in the particular fashion in which I want it displayed.

Storing the data:

for (var i = 0; i < followers.length; i++) {
    user = followers[i].screen_name;
    name = followers[i].name;
    followers_count = followers[i].followers_count;
    pic = followers[i].profile_image_url;
    followers_loc.push(followers[i].location_country);
    followerslist.push([user, pic, name, followers_count]);
}

 

The script below shows how the array built in the code above is used and iterated over. Here I used the “ng-repeat” Angular directive, where the list/array is iterated up to the limit. The script also determines the order in which the data is displayed on the screen. The clickable feature is set up in the “ng-click” Angular directive, where we call the Search function with the query as the input parameter.

Displaying the data with clickable feature:

<ul class="gallery-container" >
    <li class="gallery-item" style="list-style-type: none;" ng-repeat="value in followersStatus | limitTo: limitFollowers">
        <a href ng-click="Search(value[0])">
            
class="item-image"> src="{{ value[1] }}" style="height: 94px;width: 94px" />
class="item-desc">
class="item-name"> {{ value[2] }}
class="item-handle"> @{{ value[0] }}
class="item-followers"> class="item-label">Followers: class="item-content">{{ value[3] }}
</div> </a> </li> </ul>
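
For completeness, here is a minimal sketch of what the Search function referenced by ng-click could look like. This is a hypothetical version; the app’s actual controller may differ:

// Hypothetical sketch of the controller function bound to ng-click.
$scope.Search = function(query) {
    $scope.query = query;   // the clicked user's screen name becomes the new query
    $scope.spinner = true;  // show a loading indicator while data is fetched
    // ...fetch the followers/following lists for the new query from the loklak API...
};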

 

Visualizing Followers and Following data using Pie Chart

In this app, the data of a user’s followers and following is visualized on the basis of the location they live in. This data is visualized in the form of a pie chart using Highcharts.
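
The Highcharts series below expects $scope.locations to be an array of {name, y} points. Here is a minimal sketch of how the followers_loc array collected earlier could be aggregated into that shape (hypothetical; the app’s actual code may differ):

// Count followers per country and convert the counts into Highcharts points.
var counts = {};
followers_loc.forEach(function(country) {
    counts[country] = (counts[country] || 0) + 1;
});
$scope.locations = Object.keys(counts).map(function(country) {
    return { name: country, y: counts[country] };
});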

Script for displaying pie chart:

             $('.pie-chart').highcharts({
                chart: {
                    plotBackgroundColor: null,
                    plotBorderWidth: null,
                    plotShadow: false,
                    type: 'pie'
                },
                title: {
                    text: "Followers"
                },
                tooltip: {
                    pointFormat: '{series.name}: <b>{point.percentage:.1f}%</b>'
                },
                plotOptions: {
                    pie: {
                        allowPointSelect: true,
                        cursor: 'pointer',
                        dataLabels: {
                            enabled: true,
                            format : '',
                            style: {
                                color: (Highcharts.theme && Highcharts.theme.contrastTextColor) || 'black'
                            }
                        }
                    }
                },
                series: [{
                    name: "Followers",
                    colorByPoint: true,
                    data: $scope.locations
                }]
            });

 

Resources

  • Learn more about AngularJS here.
  • Learn more about Highcharts here.
  • Learn more about Loklak API here.
Twitter Followers Insight App for loklak Apps Site

Deploy to Azure Button for loklak

In this blog post, I am going to tell you about yet another deployment method for loklak which is easy and quick, with just one click. Deploying to Azure Websites from a Git repository just got a little easier with the Deploy to Azure Button. Simply place the button in README.md with a link to the loklak repository, and users who click on it will be directed to a streamlined deployment process. If we want to do something more advanced and customize this behavior, we can add an ARM template called “azuredeploy.json” at the root of the repository, which will present users with different inputs and configure the services as specified.

I’m going to walk you through the workflow I used to test these templates before checking them in to my repo, as well as describe some of the special behaviors of the “Deploy to Azure” site.

Adding a button

To add a deployment button, insert the following markdown to your README.md file:

[![Deploy to Azure](https://azuredeploy.net/deploybutton.svg)](https://deploy.azure.com/?repository=https://github.com/loklak/loklak_server)

How it works

When a user clicks on the button, a “referrer” header is sent to azuredeploy.net which contains the location of the Git repository of loklak_server to deploy from.

An Example Template

This blank template shows how Azure divides its inputs.

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
  },
  "variables": {
  },
  "resources": [
  ],
  "outputs": {
  }
}

Following the above template, in the case of the loklak server, the parameters used are name, image (i.e., the Docker image), port, cpu (the number of CPUs to be utilized) and space (i.e., the memory required).
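
For illustration, the parameters section could be filled in roughly like this. The default values here are assumptions for the sketch; the actual loklak template may differ:

"parameters": {
  "name":  { "type": "string" },
  "image": { "type": "string", "defaultValue": "loklak/loklak_server:latest" },
  "port":  { "type": "int", "defaultValue": 80 },
  "cpu":   { "type": "int", "defaultValue": 1 },
  "space": { "type": "string", "defaultValue": "1.5" }
}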

In the resources section we use a container; the type of the container will be

"type": "Microsoft.ContainerInstance/containerGroups",

 

And as output, we expect a public IP address to access the Azure cloud instance we created.

Everything under the root “parameters” property will be an input into our template. These parameter values then feed into the resources defined later in the template with the [parameters('paramName')] syntax.
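
For example, inside the container resource the parameter values could be referenced like this (a sketch; the actual loklak template may differ):

"properties": {
  "containers": [{
    "name": "[parameters('name')]",
    "properties": {
      "image": "[parameters('image')]",
      "ports": [{ "port": "[parameters('port')]" }],
      "resources": {
        "requests": {
          "cpu": "[parameters('cpu')]",
          "memoryInGB": "[parameters('space')]"
        }
      }
    }
  }]
}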

Try the “Deploy to Azure” Button here:



Resources

Deploy to Azure Button for loklak

Feeds Moderation in loklak Media Wall

Loklak Media Wall provides client-side filters for entities received from the loklak status.json API, such as blocking feeds from a particular user, removing duplicate feeds and hiding a particular feed post, for moderating feeds. To implement these, we need pure functions which remove the requested type of feeds and return a new array of feeds. Moreover, the original set of data must also be stored in an array so that if filters are removed, the original data can be provided to the user.

In this blog, I will explain how I implemented client-side filters to filter out particular types of feeds and provide the user with cleaner data, as requested.

Types of filters

There are four client-side filters currently provided by Loklak media wall:

    • Profanity Filter: Checks for feeds that might be offensive and removes them.
    • Remove Duplicate: Removes duplicate feeds and retweets from the original feeds.
    • Hide Feed: Removes a particular feed from the feeds.
    • Block User: Blocks a user and removes all the feeds from that particular user.

It is also important to ensure that on pagination, new feeds are filtered out based on the previous user requested moderation.

Flow Chart

The flow chart explains how the different entities received from the server are filtered, and how the original set of entities is maintained so that if the user removes a filter, the original unfiltered entities are recovered.

Working

Profanity Filter

To avoid any obscene language used in a feed’s status showing up on the media wall, and to provide rather clean data, the profanity filter can be used. For this filter, the loklak search.json API provides a field classifier_profanity which states whether some swear word is used in the status. We can check the value of this field and filter out the feed accordingly.

export function profanityFilter(feeds: ApiResponseResult[]): ApiResponseResult[] {
    const filteredFeeds: ApiResponseResult[] = [];
    feeds.forEach((feed) => {
        if (feed.classifier_language !== null && feed.classifier_profanity !== undefined) {
            if (feed.classifier_profanity !== 'sex' && feed.classifier_profanity !== 'swear') {
                filteredFeeds.push(feed);
            }
        } else {
            filteredFeeds.push(feed);
        }
    });
    return filteredFeeds || feeds;
}

Here, we check that the classifier_profanity field is neither ‘swear’ nor ‘sex’, which clearly classifies the feeds, and we push the status accordingly. Moreover, if no classifier_profanity field is provided for a particular feed, we push the feed into the filtered feeds.

Remove Duplicate

The remove duplicate filter removes tweets that are retweets or copies of some feed and returns just one original feed. We need to compare the id_str field, which is the status ID of the feed, and remove the duplicate feeds. For this filter, we create a map object, check each feed against it, remove the duplicate feeds iteratively, and return an array of feeds with unique elements.

export function removeDuplicateCheck(feeds: ApiResponseResult[]): ApiResponseResult[] {
    const map = { };
    const filteredFeeds: ApiResponseResult[] = [];
    let v: string;
    for (let a = 0; a < feeds.length; a++) {
        v = feeds[a].id_str;
        if (!map[v]) {
            filteredFeeds.push(feeds[a]);
            map[v] = true;
        }
    }
    return filteredFeeds;
}

Hide Feed

The Hide Feed filter can be used to hide a particular feed from showing up on the media wall. It can be a great option for the user to hide some particular feed they might not want to see. Basically, when the user selects a particular feed, an action is dispatched with the payload being the status ID, i.e. id_str. We then pass the feeds and the status ID through a function which returns all the feeds except the one with that id_str. Any feeds with the same id_str are thus removed from the feeds array.

export function hideFeed(feeds: ApiResponseResult[], statusId: string): ApiResponseResult[] {
    const filteredFeeds: ApiResponseResult[] = [];
    feeds.forEach((feed) => {
        if (feed.id_str !== statusId) {
            filteredFeeds.push(feed);
        }
    });
    return filteredFeeds || feeds;
}

The user can undo the action and let the filtered feed show up on the media wall again. To implement this, we pass the original entities, the filtered entities and the id_str of the particular entity through a function which looks for the entity with that id_str, adds it to the filtered entities and returns the new array of filtered entities.

export function showFeed(originalFeeds: ApiResponseResult[], feeds: ApiResponseResult[], statusId: string): ApiResponseResult[] {
    const newFeeds = [...feeds];
    originalFeeds.forEach((feed) => {
        if (feed.id_str === statusId) {
            newFeeds.push(feed);
        }
    });
    return newFeeds;
}

Block User

The Block User filter can be used for blocking feeds from a particular user/account from showing up on the media wall. To implement this, we check the user_id field of the user and remove all the feeds with the same user ID. The function accountExclusion takes the feeds and the user_id values (of the accounts) as parameters and returns an array of filtered feeds, removing all the feeds of the requested users/accounts.

export function accountExclusion(feeds: ApiResponseResult[], userId: string[]): ApiResponseResult[] {
    const filteredFeeds: ApiResponseResult[] = [];
    let flag: boolean;
    feeds.forEach((feed) => {
        flag = false;
        userId.forEach((user) => {
            if (feed.user.user_id === user) {
                flag = true;
            }
        });
        if (!flag) {
            filteredFeeds.push(feed);
        }
    });
    return filteredFeeds || feeds;
}

Key points

It is important to ensure that new feeds (received on pagination or on a new query) are also filtered according to the user-requested moderation. Therefore, before storing feeds in the state and supplying them to templates after pagination, it must be ensured that the new entities are filtered as well. For this, we keep boolean variables as state properties which record whether a particular filter was requested by the user, apply the filters to the new feeds accordingly, and store the result in filteredEntities.

Also, the original feeds must be stored separately so that on removing filters the original feeds are regained. Here, entities stores the original entities received from the server.
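
A sketch of what the relevant slice of state might look like; the property names follow the reducer cases below, but the interface itself is illustrative and the app’s actual state may contain more fields:

// Hypothetical sketch of the relevant state slice.
interface State {
    entities: ApiResponseResult[];         // original, unfiltered feeds from the server
    filteredEntities: ApiResponseResult[]; // feeds after applying the requested filters
    profanityCheck: boolean;               // is the profanity filter on?
    removeDuplicate: boolean;              // is the duplicate filter on?
    blockedUser: string[];                 // user_ids whose feeds are blocked
    lastResponseLength: number;            // length of the last API response
}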

case apiAction.ActionTypes.WALL_SEARCH_COMPLETE_SUCCESS: {
    const apiResponse = action.payload;
    let newFeeds = accountExclusion(apiResponse.statuses, state.blockedUser);
    if (state.profanityCheck) {
        newFeeds = profanityFilter(newFeeds);
    }
    if (state.removeDuplicate) {
        newFeeds = removeDuplicateCheck(newFeeds);
    }
    return Object.assign({}, state, {
        entities: apiResponse.statuses,
        filteredEntities: newFeeds,
        lastResponseLength: apiResponse.statuses.length
    });
}

case wallPaginationAction.ActionTypes.WALL_PAGINATION_COMPLETE_SUCCESS: {
    const apiResponse = action.payload;
    let newFeeds = accountExclusion(apiResponse.statuses, state.blockedUser);
    if (state.profanityCheck) {
        newFeeds = profanityFilter(newFeeds);
    }
    // append the new (filtered) feeds to those already displayed
    let filteredEntities = [...newFeeds, ...state.filteredEntities];
    if (state.removeDuplicate) {
        filteredEntities = removeDuplicateCheck(filteredEntities);
    }

    return Object.assign({}, state, {
        entities: [...apiResponse.statuses, ...state.entities],
        filteredEntities
    });
}

Reference

Feeds Moderation in loklak Media Wall

Reactive Side Effects of Actions in Loklak Search

In a Redux-based application, every component of the application is state-driven. Redux-based applications manage state in a predictable way, using a centralized store and reducers to manipulate various aspects of the state. Each reducer controls a specific part of the state, and this allows us to write code which is testable, while state is shared between the components in a stable way, i.e., there are no undesired mutations to the state from any component. This undesired mutation of the shared state is prevented by using a set of predefined functions called reducers, which are central to the system and update the state in a predictable way.

These reducers require some sort of trigger to run. This blog post concentrates on these triggers, how they get chained to form a reactive chain of events which occur in a predictable way, and how this technique is used in the latest application structure of Loklak Search. In any state-based asynchronous application like Loklak Search, the main issue with state management is handling asynchronous action streams in a predictable manner and chaining asynchronous events one after the other. The technique of reactive action chaining solves the problem of dealing with asynchronous data streams in a predictable and manageable manner.

Overview

Actions are the triggers for the reducers; each Redux action consists of a type and an optional payload. The type of an action is like its ID, which should be purposely unique in the application. Each reducer function takes the current state which it controls and the action which is dispatched. The reducer decides whether it needs to react to that action or not. If it reacts to the action, it modifies the state according to the action payload and returns the modified state; otherwise, it returns the original state. So at the core, the actions are like the triggers in the application, which make one or more reducers work. This is the basic architecture of any Redux application. The actions are the triggers, and the reducers are the state maintainers and modifiers. The only way to modify the state is via a reducer, and a reducer only runs when a corresponding action is dispatched.

Now, who dispatches these actions? This question is very important. Actions can technically be dispatched from anywhere in the application: from components, from services, from directives, from pipes, etc. But in almost every situation we want the action to be dispatched by a component. A component which wishes to modify the state dispatches the corresponding action.

Reactive Effects

If the components are the ones who dispatch the actions which trigger the reducer functions that modify the state, then what are these effects, since the cycle of events seems pretty much complete? Effects are the side effects of a particular action. The term “side effect” means these are pieces of code which run whenever an action is dispatched. Don’t confuse them with reducer functions: effects are not the same as reducers, as they are not associated with any state, i.e. they don’t modify any state. They are independent sources of other actions. What this means is that whenever an action is dispatched and we want to dispatch some other action, maybe immediately or asynchronously, we use these side effects. So in a nutshell, effects are pieces of code which react to a particular action and eventually dispatch some other actions.

The most common use case of effects is to call a corresponding service, fetch the data from the server, and then, when the data is loaded, dispatch a SearchCompleteAction. These are the simplest use cases of effects and are the most commonly used in Loklak Search. The piece of code below shows how it is done.

@Effect()
search$: Observable<Action>
    = this.actions$
        .ofType(apiAction.ActionTypes.SEARCH)
        .map((action: apiAction.SearchAction) => action.payload)
        .switchMap(query => {
            return this.apiSearchService.fetchQuery(query)
                .map(response => new apiAction.SearchCompleteSuccessAction(response))
                // dispatch a fail action if the API request errors out
                .catch(() => Observable.of(new apiAction.SearchCompleteFailAction('')));
        });
This is a very simple type of effect: it filters out all the actions and reacts only to the type of action we are interested in, here SEARCH, and then, after calling the respective service, it dispatches either a SearchComplete or a SearchFail action depending on the status of the response from the API. The effect runs on the SEARCH action and eventually dispatches the success or the fail action.

This scheme illustrates effects as another point, apart from components, from which actions can be dispatched. The difference is that components dispatch actions on the basis of user inputs and events, whereas effects dispatch actions on the basis of other actions.

Reactive Chaining of Actions

We can thus take advantage of this approach in the form of reactive chaining of actions. Reactive chaining means that some component dispatches an action which, as a side effect, dispatches some other action, which dispatches another set of actions, and so on. A single action dispatched from a component thus brings about a series of actions which follow one another. This approach makes it possible to write reducers at a granular level rather than at the complete-state level, as a series of actions can be set up which starts from a fine grain and reaches out to a coarse grain. The Loklak Search application uses this technique to update the state of the query. The reducers in Loklak Search, rather than updating the whole query structure, update only the required part of the state. This helps code maintainability, as one type of query attribute has no effect on the other types.

@Effect()
inputChange$: Observable<Action>
    = this.actions$
        .ofType(queryAction.ActionTypes.VALUE_CHANGE)
        .map(_ => new queryAction.QueryChangeAction(''));

@Effect()
filterChange$: Observable<Action>
    = this.actions$
        .ofType(queryAction.ActionTypes.FILTER_CHANGE)
        .map(_ => new queryAction.QueryChangeAction(''));

Here the QUERY_CHANGE action can do further processing of the query and then dispatch the SearchAction, which eventually calls the service and returns the response; the success or fail actions can then be dispatched in turn.
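
A sketch of what that next link in the chain could look like, assuming a QUERY_CHANGE action type and a SearchAction carrying the processed query; the names here are illustrative and may not match Loklak Search exactly:

@Effect()
queryChange$: Observable<Action>
    = this.actions$
        .ofType(queryAction.ActionTypes.QUERY_CHANGE)
        .map((action: queryAction.QueryChangeAction) => action.payload)
        // ...any further processing of the query would happen here...
        .map(query => new apiAction.SearchAction(query));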

Conclusion

Reactive side effects are one of the most beautiful things we can do with Redux and reactive programming. They provide an easy, clean way to chain events in an application, which helps in cleaner, non-overlapping state management along with clean and simple reducers. This idea of reactive chaining can be extended to any level of sophistication, and that too in a simple and easy-to-understand manner.

Resources and links

Reactive Side Effects of Actions in Loklak Search

Enabling Google App Signing for Android Project

Signing key management for Android apps is a hectic procedure and can grow out of hand rather quickly for large organizations with several independent projects. We at FOSSASIA also had to face similar difficulties in the management of individual keys by project maintainers and wanted to gather all these Android projects under a single key management platform.

To handle the complexities and the security aspect of the process, this year Google announced the optional App Signing program, where Google takes your existing key’s encrypted file and stores it on their servers, and asks you to create a new upload key which will be used to sign further updates of the app. It takes the certificate of your new upload key and maps it to the managed private key. Now, whenever there is a new upload of the app, its signing certificate is matched with the upload key certificate, and after verification the app is signed by the original private key on the server itself and delivered to the user. The advantage comes when you lose your key or its password, or it is compromised. Before the App Signing program, if your key got lost, you had to launch your app under a new package name, losing your existing user base. With Google managing your key, if you lose your upload key, the account owner can request Google to assign a new upload key, as the private key is secure on their servers.

There is no difference in the delivered app from the previous one, as it is still finally signed by the original private key as before, except that Google also optimizes the app by splitting it into multiple APKs according to hardware, demographics and other factors, resulting in a much smaller app! This blog will take you through the steps to enable the program for existing and new apps. A bit of a warning though: for security reasons, opting into the program is permanent, and once you do it, it is not possible to back out, so think it through before committing.

For existing apps:

First you need to go to the particular app’s detail section and then into Release Management > App Releases. There you would see the Get Started button for App Signing.

The account owner must first agree to its terms and conditions, and once that’s done, a page like this will be presented with information about the app signing infrastructure at the top.

So, as per the instructions, download the PEPK jar file to encrypt your private key. For this process, you need your existing private key and its alias and password. It is fine if you don’t know the key password, but the store password is needed to generate the encrypted file. Then execute this command in the terminal, as written in Step 2 of your Play console:

java -jar pepk.jar --keystore={{keystore_path}} --alias={{alias}} --output={{encrypted_file_output_path}} --encryptionkey=eb10fe8f7c7c9df715022017b00c6471f8ba8170b13049a11e6c09ffe3056a104a3bbe4ac5a955f4ba4fe93fc8cef27558a3eb9d2a529a2092761fb833b656cd48b9de6a

You will have to change the text inside the curly braces to the correct keystore path, alias and output file path respectively.

Note: The encryption key has been the same for me across 3 different Play Store accounts, but it might be different for you, so please confirm in the Play console first.

When you execute the command, it will ask you for the keystore password, and once you enter it, the encrypted file will be generated at the path you specified. You can upload it using the button on the console.

After this, you’ll need to generate a new upload key. You can do this using any of the several methods listed here, but for demonstration we’ll be using the command line:

keytool -genkey -v -keystore {{keystore_path}} -alias {{alias_name}} -keyalg RSA -keysize 2048 -validity 10000

The command will ask you a couple of questions related to the passwords and signing information, and then the key will be generated. This will be your upload key, used for further signing of your apps. So keep it and its password secure and handy (even though it is expendable now).

After this step, you need to create a PEM upload certificate for this key, and in order to do so, execute this command:

keytool -export -rfc -keystore {{keystore_path}} -alias {{alias_name}} -file {{upload_certificate.pem}}

After this is executed, it’ll ask you for the keystore password, and once you enter it, the PEM file will be generated and you will have to upload it to the Play console.

If everything goes right, your Play console will look something like this:

 

Click enrol and you’re done! Now you can go to the App Signing section of the Release Management console and see your app signing and new upload key certificates.

 

You can use the SHA1 hashes to confirm which key corresponds to the private key and which to the upload key, if ever in confusion.

For new apps:

For new apps, the process is like a walk in the park. You just need to enable App Signing, and you’ll get an option to continue, opt out or re-use an existing key.

 

If you re-use an existing key, the process is finished then and there, and the existing key is deployed as the upload key for this app. But if you choose to continue, App Signing will be enabled, Google will use an arbitrary key as the private key for the app, and the first app build you upload will get its key registered as the upload key.

 

This is a screenshot of the App Signing console when no first app has been uploaded; you can see that it still has an app signing certificate for a key which you did not upload or have access to.

If you want to know more about app signing program, check out these links:

Enabling Google App Signing for Android Project

Introducing Stream Servlet in loklak Server

A major part of my GSoC proposal was adding a stream API to the loklak server. In a previous blog post, I discussed the addition of Mosquitto as a message broker for MQTT streaming. After testing this service for a few days and making some minor improvements, I was in a position to expose the stream to outside users using a simple API.

In this blog post, I will be discussing the addition of the /api/stream.json endpoint to the loklak server.

HTTP Server-Sent Events

Server-sent events (SSE) is a technology where a browser receives automatic updates from a server via HTTP connection. The Server-Sent Events EventSource API is standardized as part of HTML5 by the W3C.

Wikipedia

This API is supported by all major browsers except Microsoft Edge. For loklak, the plan was to use this event system to send messages to connected users as they arrive. Apart from browsers, the EventSource API can also be used with many other technologies.
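
For instance, a browser client could consume the stream with just a few lines of JavaScript (a hypothetical sketch; the channel value here is only illustrative):

// Hypothetical client-side usage of the stream endpoint.
var source = new EventSource('/api/stream.json?channel=all');
source.onmessage = function (event) {
    var message = JSON.parse(event.data); // each event carries one message
    console.log(message);
};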

Jetty Eventsource Plugin

For Java, we can use Jetty’s EventSource plugin to send events to clients. It is similar to other Jetty servlets when it comes to processing arguments, handling requests, etc., but it provides a simple interface to send events to connected users as they occur.

Adding Dependency

To use this plugin, we can add the following line to Gradle dependencies –

compile group: 'org.eclipse.jetty', name: 'jetty-eventsource-servlet', version: '1.0.0'

[SOURCE]

The Event Source

An EventSource is the object which the EventSourceServlet requires to send events. All the logic for emitting events needs to be defined in the related class. To link a servlet with an EventSource, we need to override the newEventSource method –

public class StreamServlet extends EventSourceServlet {
    @Override
    protected EventSource newEventSource(HttpServletRequest request) {
        String channel = request.getParameter("channel");
        if (channel == null) {
            return null;
        }
        if (channel.isEmpty()) {
            return null;
        }
        return new MqttEventSource(channel);
    }
}

[SOURCE]

If no channel is provided, the EventSource object will be null and the request will be rejected. Here, the MqttEventSource is used to handle the stream of Tweets as they arrive from the Mosquitto message broker.

Cross Site Requests

Since the requests to this endpoint can’t be of JSONP type, it is necessary to allow cross site requests on this endpoint. This can be done by overriding the doGet method of the servlet –

@Override
protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
     response.setHeader("Access-Control-Allow-Origin", "*");
    super.doGet(request, response);
}

[SOURCE]

Adding MQTT Subscriber

When a request for events arrives, the constructor to MqttEventSource is called. At this stage, we need to connect to the stream from Mosquitto for the channel. To achieve this, we can set the class as MqttCallback using appropriate client configurations –

public class MqttEventSource implements MqttCallback {
    ...
    MqttEventSource(String channel) {
        this.channel = channel;
    }
    ...
    this.mqttClient = new MqttClient(address, "loklak_server_subscriber");
    this.mqttClient.connect();
    this.mqttClient.setCallback(this);
    this.mqttClient.subscribe(this.channel);
    ...
}

[SOURCE]

By setting the callback to this, we can override the messageArrived method to handle the arrival of a new message on the channel. Just to mention, the client library used here is Eclipse Paho.

Connecting MQTT Stream to SSE Stream

Now that we have subscribed to the channel we wish to send events from, we can use the Emitter to send events from our EventSource by implementing it –

public class MqttEventSource implements EventSource, MqttCallback {
    private Emitter emitter;


    @Override
    public void onOpen(Emitter emitter) throws IOException {
        this.emitter = emitter;
        ...
    }

    @Override
    public void messageArrived(String topic, MqttMessage message) throws Exception {
        this.emitter.data(message.toString());
    }
}

[SOURCE]

Closing Stream on Disconnecting from User

When a client disconnects from the stream, it doesn’t make sense to stay connected to the broker. We can use the onClose method to disconnect the subscriber from the MQTT broker –

@Override
public void onClose() {
    try {
        // disconnect from the broker before releasing the client's resources
        this.mqttClient.disconnect();
        this.mqttClient.close();
    } catch (MqttException e) {
        // Log some warning
    }
}

[SOURCE]

Conclusion

In this blog post, I discussed connecting the MQTT stream to an SSE stream using Jetty’s EventSource plugin. Once in place, this event system saves us from making too many requests to collect and visualize data. The possibilities for applications of such a feature are huge.

This feature can be seen in action at the World Mood Tracker app.

The changes were introduced in pull request loklak/loklak_server#1474 by @singhpratyush (me).

Resources

Introducing Stream Servlet in loklak Server