Under the Hood: Installation wizard

Not yet merged, but hopefully soon to be, is the new installation wizard.

It is implemented as a web app with a corresponding API, but it is not started with the rest of the server; instead it has its own start script, installation.sh

But even if you start loklak via start.sh, you will be asked whether you want to start the installation wizard. Don’t worry: if you don’t do anything, the prompt will just time out and loklak will start as usual. You can run the installation at any later time.


So here are the steps that the wizard currently offers:

1. Admin account creation


2. User registration settings (currently only whether and with what confirmation users can register)

3. General settings

  • the host url (which is read from the browser url and therefore makes it easy to set the actual url, even on one-click deployments)
  • the peer name, to make installations identifiable more easily
  • back-end settings


4. SMTP-Settings: configure loklak to send mails via an existing email account (with a test option). You will probably wonder why there are a lot of options (see the sketch after this list). Well, we have:

  • the server to connect to (not necessarily matching the email address’s domain)
  • the email address to put into the header (this can be any address the server permits; for example, if you have an alias registered on the server, you can set it here)
  • the name to display with the email
  • the actual login name (often the email address, but not necessarily!)
  • the password
  • the port (usually dependent on the encryption mode, but again not necessarily)
  • the encryption mode (today usually startTLS or TLS)
  • the option to disable certificate checking if you’re using a server with a self-signed certificate or are behind an SSL proxy
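For orientation, here is a rough sketch of how these options map onto a standard javax.mail setup. The values are purely illustrative and this is not the loklak implementation; it assumes the javax.mail classes are on the classpath:

Properties props = new Properties();
props.put("mail.smtp.host", "smtp.example.org");   // the server to connect to
props.put("mail.smtp.port", "587");                // port matching the encryption mode
props.put("mail.smtp.auth", "true");
props.put("mail.smtp.starttls.enable", "true");    // startTLS as the encryption mode
Session session = Session.getInstance(props, new Authenticator() {
    protected PasswordAuthentication getPasswordAuthentication() {
        // the actual login name and password, not necessarily the mail address
        return new PasswordAuthentication("loginname", "secret");
    }
});
Message msg = new MimeMessage(session);
msg.setFrom(new InternetAddress("alias@example.org", "Display Name")); // header address plus display name
msg.setRecipients(Message.RecipientType.TO, InternetAddress.parse("to@example.org"));
msg.setSubject("loklak SMTP test");
msg.setText("test mail");
Transport.send(msg);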

In a future version, we could maybe include a mechanism that automatically tries to fill out some fields from other values, like many mail clients do, but that’s quite some work and usually involves a database with host-specific values.

5. HTTPS-Settings: activate HTTPS for Loklak (if you don’t use an HTTP proxy, as loklak.org and the one-click deployments do) and configure whether Loklak should check certificates from other hosts it connects to; skipping that check can be useful if you are behind an SSL proxy.


6. A summary of all values


After submitting (and if no error is thrown by the server), the installation wizard will shut down.


If you started the installation with the normal start script, it will continue to start loklak as usual.


Beware: sometimes the browser does not really refresh and will continue showing parts of the installation page instead of the normal Loklak page. If that happens, just clear your cache.


Under the hood: HTTPS in Loklak

For some time now, loklak natively offers HTTPS support. On most one-click deployments, that is not really necessary, as there is usually an HTTP proxy in front which forwards all the traffic. These HTTP proxies can then use their own HTTPS implementation. Also, Loklak is usually run with normal user privileges, which means it can’t open a socket on the standard HTTP and HTTPS ports, but only on ports greater than 1024.

Still, in some setups it might be desirable to not have an extra HTTP proxy installed but still benefit from a secure connection, especially for user login and similar things.

These are the current options that can be set in conf/config.properties or data/settings/customized_config.properties:
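Roughly, the block looks like this (only https.key and https.cert are confirmed by the code further below; the other key names are assumptions):

https.mode = off
https.keysource = keystore
https.keystore = keys/keystore.jks
https.keystore.password = changeit
https.key = keys/key.pem
https.cert = keys/cert.pem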




The first setting has four options:

  1. off: the default. Only HTTP
  2. on: HTTP and HTTPS
  3. redirect: redirect all HTTP requests to HTTPS
  4. only: only HTTPS

The second lets us choose where to get our keys from. In Java, the usual way is to use a key-store. That’s a file, protected by a password (the next two options). But if the keysource is set to “key-cert”, we can also use PEM-formatted keys and certs, which is generally more common for non-Java applications (like apache, nginx etc.).

If “key-cert” is chosen, we specify .pem files (the last two options). If a whole certificate chain is required, all the certificates have to be in one file, simply concatenated together.
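For example, concatenating a server certificate with its intermediate (file names here are illustrative):

cat server.crt intermediate.crt > cert.pem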

Loklak will create a keystore from the .pem files using the bouncycastle library. It will not write it to disk. Here’s the code for that:

//generate a random password for the in-memory keystore
char[] chars = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789".toCharArray();
StringBuilder sb = new StringBuilder();
Random random = new Random();
for (int i = 0; i < 20; i++) {
    char c = chars[random.nextInt(chars.length)];
    sb.append(c);
}
String password = keystoreManagerPass = sb.toString();

//get key and cert
File keyFile = new File(DAO.getConfig("https.key", ""));
if(!keyFile.exists() || !keyFile.isFile() || !keyFile.canRead()){
   throw new Exception("Could not find key file");
}
File certFile = new File(DAO.getConfig("https.cert", ""));
if(!certFile.exists() || !certFile.isFile() || !certFile.canRead()){
   throw new Exception("Could not find cert file");
}

Security.addProvider(new org.bouncycastle.jce.provider.BouncyCastleProvider());

byte[] keyBytes = Files.readAllBytes(keyFile.toPath());
byte[] certBytes = Files.readAllBytes(certFile.toPath());

PEMParser parser = new PEMParser(new InputStreamReader(new ByteArrayInputStream(certBytes)));
X509Certificate cert = new JcaX509CertificateConverter().setProvider("BC").getCertificate((X509CertificateHolder) parser.readObject());

parser = new PEMParser(new InputStreamReader(new ByteArrayInputStream(keyBytes)));
PrivateKey key = new JcaPEMKeyConverter().setProvider("BC").getPrivateKey((PrivateKeyInfo) parser.readObject());

keyStore = KeyStore.getInstance(KeyStore.getDefaultType());
keyStore.load(null, null);

keyStore.setCertificateEntry(cert.getSubjectX500Principal().getName(), cert);
keyStore.setKeyEntry("defaultKey", key, password.toCharArray(), new Certificate[] {cert});


A last interesting option is:
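Presumably something like this (the exact key name is an assumption; the values discussed below are all, peer and none):

httpsclient.trustselfsignedcerts = all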


Loklak is by default configured to trust any HTTPS connection, even if the certificate is invalid. That was done so people behind HTTPS proxies can still use Loklak.

But it is also possible to make Loklak honor certificates. If “none” is selected, it behaves like most applications: if the certificate is wrong, the connection is closed. But even then, it’s possible to import certificates system-wide; Loklak will then accept those connections.

It’s also possible to make Loklak work with peers with broken/self-signed certificates (so the connection is at least not plain text) but still require valid certificates from other sources (for example twitter). That’s the “peer” option.

Creating an HttpConnection in Java that does not check certificates is actually much more tricky than creating a safe one. Here’s the code to create a connection manager cm that ignores certificates, if you need one at some point:

boolean trustAllCerts = ...;
Registry<ConnectionSocketFactory> socketFactoryRegistry = null;
try {
    SSLConnectionSocketFactory trustSelfSignedSocketFactory = new SSLConnectionSocketFactory(
            new SSLContextBuilder().loadTrustMaterial(null, new TrustSelfSignedStrategy()).build(),
            new TrustAllHostNameVerifier());
    socketFactoryRegistry = RegistryBuilder
            .<ConnectionSocketFactory> create()
            .register("http", new PlainConnectionSocketFactory())
            .register("https", trustSelfSignedSocketFactory)
            .build();
} catch (KeyManagementException | NoSuchAlgorithmException | KeyStoreException e) {
    // fall through: socketFactoryRegistry stays null and the default manager is used
}
PoolingHttpClientConnectionManager cm = (trustAllCerts && socketFactoryRegistry != null) ?
        new PoolingHttpClientConnectionManager(socketFactoryRegistry) :
        new PoolingHttpClientConnectionManager();

Under the hood: Accounting example

The login API is now the first service in loklak that utilizes the accounting feature. It does so to protect user accounts against brute-force login attempts.

How does it work?

Quite simple. First, we define some permissions:

public JSONObject getDefaultPermissions(BaseUserRole baseUserRole) {
   JSONObject result = new JSONObject();
   result.put("maxInvalidLogins", 10);
   result.put("blockTimeSeconds", 120);
   result.put("periodSeconds", 60);
   result.put("blockedUntil", 0);
   return result;
}

Each user is only allowed to make 10 invalid login attempts over a period of 60 seconds and will otherwise get blocked for 120 seconds. Why do we save that in the permissions? Because we could change them on a per-user basis. If one user gets blocked for the 3rd time, we could raise his block time to 24h, for example. That’s not implemented yet though.

Now, whenever we have a bad login attempt we save it in the accounting system:

authorization.getAccounting().addRequest(this.getClass().getCanonicalName(), "invalid login");

throw new APIException(422, "Invalid credentials");

Note that we have to specify some path or name in the accounting object. We use the full name of the login service, so that other services won’t interfere with it.

Now this is how we check:

private void checkInvalidLogins(Query post, Authorization authorization, JSONObjectWithDefault permissions) throws APIException {

   // is the client already blocked?
   long blockedUntil = permissions.getLong("blockedUntil");
   if(blockedUntil != 0) {
      if (blockedUntil > Instant.now().getEpochSecond()) {
         Log.getLog().info("Blocked ip " + post.getClientHost() + " because of too many invalid login attempts.");
         throw new APIException(403, "Too many invalid login attempts. Try again in "
               + (blockedUntil - Instant.now().getEpochSecond()) + " seconds");
      }
      authorization.setPermission(this, "blockedUntil", 0); // the block has expired, reset it
   }

   // check if too many invalid login attempts were made already
   JSONObject invalidLogins = authorization.getAccounting().getRequests(this.getClass().getCanonicalName());
   long period = permissions.getLong("periodSeconds", 600) * 1000; // get time period in which wrong logins are counted (e.g. the last 10 minutes)
   int counter = 0;
   for(String key : invalidLogins.keySet()){
      if(Long.parseLong(key, 10) > System.currentTimeMillis() - period) counter++;
   }
   if(counter > permissions.getInt("maxInvalidLogins", 10)){
      authorization.setPermission(this, "blockedUntil", Instant.now().getEpochSecond() + permissions.getInt("blockTimeSeconds", 120));
      throw new APIException(403, "Too many invalid login attempts. Try again in "
            + permissions.getInt("blockTimeSeconds", 120) + " seconds");
   }
}

First we check if there’s a client-specific override of our permissions: blockedUntil.

Normally that is 0, but if it’s set to some second in the future, we respond with an error message saying how long the client has to wait.

Otherwise, we check how many entries are in the accounting object for this service. As the login service only saves bad requests, we only need to know their number and when they were made.

Each request in the accounting object is stored with the current timestamp as key. So we check which keys were created in the last 60 seconds (as defined in the permissions). If there are more than 10, we set the permission blockedUntil to the current second + 120.
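For illustration, the requests object for the login service could then look roughly like this (keys are millisecond timestamps; the exact shape is an assumption based on the check above):

{
  "1466712000000": "invalid login",
  "1466712034000": "invalid login"
}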

This is a short example of a powerful tool to achieve user-specific reactions in our services, and it could be adapted to many different scenarios. Feel free to try 🙂


Under the hood: Public key login

This blog post is about a new login method. It’s based on asymmetric encryption and will help us make user authentication easier in some circumstances.

So what’s the idea? Most people reading this will know about the feature on github where one can register an ssh-key. That makes working with git easier, as the user can then automatically log in with git, without typing in a password.

It’s even better: the user only needs one key pair and can use it on many different servers, services etc. without needing to worry that his password gets stolen. Even if github gets hacked or some evil administrator wants to read the password, there is no chance: the server only knows the public key.

The user has a key pair, consisting of a public and a private key. The public key can really be public, everyone may know about it. The private key must, under all circumstances, be kept private.

So what use case does this have for loklak? One thing is IoT. We can create a key pair for a device once and easily give it access to a loklak server without needing to create a new password and store it on the device. All we need is the public key of the device, which we can then register on as many loklak servers as we want.

Another idea would be a trust system between loklak instances. If we collaborate with different loklak instances from people we don’t know, key-pairs are a good way to authenticate and remember each other.

How to use?

There’s a new key registration app where you can either upload an existing public key or let the server create a new key pair for you. The public key is stored on the server while the private key should be saved by the client. The app will also create a key hash, basically the SHA-256 hash of the public key in DER format.
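Computing such a hash in Java could look roughly like this (a sketch using java.security.MessageDigest and java.util.Base64, assuming a PublicKey object publicKey; not necessarily the exact loklak code):

byte[] der = publicKey.getEncoded(); // Java encodes public keys in X.509/DER by default
byte[] digest = MessageDigest.getInstance("SHA-256").digest(der);
String keyhash = Base64.getEncoder().encodeToString(digest);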

The app will display the keys in two different formats:

  • DER+BASE64 (a format that can easily be used in java projects)
  • PEM (a format that can be used with openssl)

Either way it’s the same key. The idea now is that the client says it wants to log in with a certain email address and a specific key (it only sends over the key hash, to save traffic). The server replies with a challenge (just a random string). The client now has to create a valid signature of the challenge with its private key and send it back to the server. The server verifies the signature and logs the user in (actually it creates an access token for API access, as browser logins will usually use a password).

An example:

Client calls:

https://loklak.org/api/login.json?login=user@example.com&keyhash=4GV3dPkeQm0XPksr69DKy0LD7ecBOYVX1YN6xTstp9Y=

Server replies:

  "session": {"identity": {
    "type": "host",
    "name": "",
    "anonymous": true
  "challenge": "dmP4jv9oG7Jor3941BeiQRFzOOUsBA",
  "sessionID": "lAclhJxljT3vnMuFzjDpm6rxzdLd1S",
  "message": "Found valid key for this user. Sign the challenge with you public key and send it back, together with the sessionID"

The client calculates the signature:

String challenge = "dmP4jv9oG7Jor3941BeiQRFzOOUsBA";
Signature sig = Signature.getInstance("SHA256withRSA");
sig.initSign(privateKey);       // the client's private key
sig.update(challenge.getBytes());
String result = new String(Base64.getEncoder().encode(sig.sign()));

And sends the answer back:
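Presumably a call along these lines (the parameter names here are assumptions based on the server’s reply above):

https://loklak.org/api/login.json?sessionID=lAclhJxljT3vnMuFzjDpm6rxzdLd1S&signature=<the BASE64-encoded signature>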


The server will then respond with an access token.
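On the server side, verifying such a signature in Java is short; a minimal sketch (assuming the stored PublicKey publicKey, the challenge string and the client’s BASE64 answer signatureBase64; not necessarily the exact loklak code):

Signature sig = Signature.getInstance("SHA256withRSA");
sig.initVerify(publicKey);      // the user's registered public key
sig.update(challenge.getBytes(StandardCharsets.UTF_8));
boolean valid = sig.verify(Base64.getDecoder().decode(signatureBase64));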

With openssl, the process should be fundamentally similar:

openssl dgst -sha256 -sign privkey.pem -out response.txt challenge.txt

and then some BASE64 encoding on response.txt.
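For example, openssl’s base64 subcommand does that encoding:

openssl base64 -in response.txt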

Currently, only RSA keys (in the sizes 1024, 2048 and 4096) can be registered or created, but in the future we may support elliptic-curve signatures, which would reduce key and signature sizes and also the needed CPU time.


Under the hood: Javadocs

This blog post is about a minor topic, as my main work from this week hasn’t landed in the master tree yet.

How to enable automatic javadoc creation using Travis-ci

I wanted to have the javadocs of loklak automatically built and published. That’s what many other java projects do, and it often helps me a lot to find docs through a search engine on the web. In the end I found a nice description of how to do it.

As you probably know, we use an automatic build system for loklak. It’s called travis-ci and integrates directly with github.

If you want to use it for your own github repo, just go to the repo settings, select “Webhooks & services” and click “add service”. You will find travis-ci in the list there.

The build can be triggered by different events, most notably on pull requests, i.e. before the code is merged into the target branch. This helps us a lot to make sure a PR does not break anything.

The second trigger, barely noticed in our project, is the one afterwards, which fires every time a commit is pushed to the master branch. Until recently, we didn’t really make use of that, as we only wanted to make sure the build works. The result itself was not used afterwards.

Another github feature is github.io (GitHub Pages). When activated for a repo (you can do it from Settings->Options->Github Pages), it creates a ‘gh-pages’ branch in your repo that contains html content, which is then displayed on github.io. So what we want to do is build the javadocs and push them to that specific branch.

In .utility/push-javadoc-to-gh-pages.sh we now have a script with the following content:


if [ "$TRAVIS_REPO_SLUG" = "loklak/loklak_server" ] && [ "$TRAVIS_JDK_VERSION" = "oraclejdk8" ] && [ "$TRAVIS_PULL_REQUEST" = "false" ] && [ "$TRAVIS_BRANCH" = "master" ]; then

echo -e "Creating javadoc...\n"

ant javadoc

echo -e "Publishing javadoc...\n"

cp -R html/javadoc $HOME/javadoc-latest

cd $HOME
git config --global user.email "[email protected]"
git config --global user.name "travis-ci"
git clone --quiet --branch=gh-pages https://${GH_TOKEN}@github.com/loklak/loklak_server gh-pages > /dev/null

cd gh-pages
git rm -rf ./*
cp -Rf $HOME/javadoc-latest/* ./
git add -f .
git commit -m "Latest javadoc on successful travis build $TRAVIS_BUILD_NUMBER auto-pushed to gh-pages"
git push -fq origin gh-pages > /dev/null 2>&1

if [ $? -eq 0 ]; then
echo -e "Published Javadoc to gh-pages.\n"
exit 0
else
echo -e "Publishing failed. Maybe the access-token was invalid or had insufficient permissions.\n"
exit 1
fi

fi


When run, it checks that it’s in the loklak repo, that it’s not a pull request and that the build happens on the master branch. It then calls ant to build the javadocs (note the build target javadoc in the ant build file).

It then removes the old content of the gh-pages branch, adds the new one, commits and pushes it. The interesting part here is the cloning step. Note the variable ${GH_TOKEN}.

Here’s the actual magic in the story: we have a publicly visible script that somehow contains an access token to (in this case) my github account. How can this be safe?

Let’s have a look at our travis.yml file:

language: java

jdk:
  - oraclejdk8

env:
  global:
    - secure: "DbveaxDMtEP+/Er6ktKCP+P42uDU8xXWRBlVGaqVNU3muaRmmZtj8ngAARxfzY0f9amlJlCavqkEIAumQl9BYKPWIra28ylsLNbzAoCIi8alf9WLgddKwVWsTcZo9+UYocuY6UivJVkofycfFJ1blw/83dWMG0/TiW6s/SrwoDw="

script:
  - ./gradle_init.sh
  - gradle assemble
  - ./gradle_clean.sh
  - ant
  - bin/start.sh

install: true

branches:
  only:
    - "master"

after_success:
  - sh .utility/push-javadoc-to-gh-pages.sh

We see that the script from above is called if the build was successful. So what’s that “secure” part?

Travis offers a very interesting way to let us put data into the build file that can’t be seen by outsiders. It works with public/private key encryption. Travis creates a key pair for each repo.

With their ruby-tool (also called travis), we can easily encrypt data with the public key by calling

travis encrypt some-data

from the corresponding git folder and adding the output to the build file. So when Travis reads the build file and finds the secure block, it uses the private key (that only Travis knows) to decrypt the data. As the key pair is only used for this repo, it only works when called from there.

So we have a way to give Travis data only it can use. The actual encryption command looked something like:

travis encrypt GH_TOKEN=323pthaid123ntahoeudi

We saved an access token in the variable GH_TOKEN, which we then use later in the script.

That’s it; you can now always see the latest javadocs at http://loklak.github.io/loklak_server/


Under the hood: Authorization

This post is about the current status of the authorization system and will focus on the parts that are necessary to understand in order to write API servlets.


In order to understand the following concept, we first have to clarify how we manage permissions. A permission by itself is a key-value pair, declared on servlet level. Think of something like this:

allowed-to-download : true

max-downloads : 10

We currently have three levels on which permissions get set:


Base-user-roles

We have a small set of hardcoded roles, we call them base-user-roles, which get permissions assigned directly in the java code.

These permissions can be seen as default permissions. They can be overridden for user-roles or individual users.

We currently have the following base-user-roles:

ANONYMOUS, // the lowest base-user-role, usually for not-logged-in users
USER, // normal users
PRIVILEGED, // users with special privileges, like moderators
ADMIN // administrators


User-roles

Not hardcoded but fully configurable, we have user-roles. Each user-role has a parent from which it inherits permissions. The parent can either be a base-user-role or another user-role. In the end, there must always be a base-user-role at the root of the tree.

Each user-role can override permissions inherited from its parent, making them highly configurable.

By default, the user-roles are directly derived from the base-user-roles and do not have any overrides.


Users

At the end, there are the individual users. Each user has a user-role, from which she inherits permissions. But of course, she can again have individual overrides.



In order to use the AAA-system (Authentication, Authorization, Accounting), a servlet has to extend the AbstractAPIHandler class. It therefore has to override four methods:

public BaseUserRole getMinimalBaseUserRole()
public JSONObject getDefaultPermissions(BaseUserRole baseUserRole)
public String getAPIPath()
public JSONObject serviceImpl(Query post, Authorization rights, final JSONObjectWithDefault permissions)

The third is just the path of the servlet. The fourth is the actual implementation of the servlet; we will come back to that shortly.

The first two are exclusively for the authorization system. Here’s an example implementation:

public BaseUserRole getMinimalBaseUserRole() {
   return BaseUserRole.PRIVILEGED;
}
This means that the user-role of the user accessing the servlet must be derived at least from the PRIVILEGED base-user-role. Of course, ADMIN would also be ok. All other users get a 401 HTTP error. This method is intended to make it very explicit and hard to confuse whether a servlet should be limited to, for example, admins only. For more sophisticated permission management, we use the second method:

public JSONObject getDefaultPermissions(BaseUserRole baseUserRole){
   JSONObject result = new JSONObject();
   switch(baseUserRole){
      case ADMIN:
         result.put("list_users", true);
         result.put("list_users-roles", true);
         result.put("edit-all", true);
         result.put("edit-less-privileged", true);
         break;
      case PRIVILEGED:
         result.put("list_users", true);
         result.put("list_users-roles", true);
         result.put("edit-all", false);
         result.put("edit-less-privileged", true);
         break;
      default:
         result.put("list_users", false);
         result.put("list_users-roles", false);
         result.put("edit-all", false);
         result.put("edit-less-privileged", false);
   }
   return result;
}

Here we define default permissions based on the base-user-role. The example is from the user-management servlet, so we only allow administrators and specially privileged users anyway. The default values would not be used in this example.

In this method we should declare all keys we want to use in the servlet. By default, we allow privileged users to edit the profiles of users with the base-user-role USER or ANONYMOUS.

We may want a user-role that is able to list the users but not able to edit any user. We could achieve that by overriding ‘edit-less-privileged’ to ‘false’. This will be possible via the user-management servlet, for which we’ll have a graphical app.

So how do we use it? Here’s how the serviceImpl could look:

public JSONObject serviceImpl(Query post, Authorization authorization,
      final JSONObjectWithDefault permissions) throws APIException {

   JSONObject result = new JSONObject();

   switch (post.get("show","")){
      case "user-list":
         if(permissions.getBoolean("list_users", false)){
            result.put("user-list", DAO.authorization.getPersistent());
         } else throw new APIException(403, "Forbidden");
         break;
      default: throw new APIException(400, "No 'show' parameter specified");
   }

   return result;
}

The ‘permissions’ object contains all the values for this servlet: the defaults from the base-user-roles, the overrides from the user-role(s) and the overrides for the specific user.

It’s actually a ‘JSONObject’, but handed over as a ‘JSONObjectWithDefault’, which just extends the ‘get’ methods of ‘JSONObject’ with default values. This is just to avoid security issues caused by errors.

We can also get the permissions for this or other servlets from the Authorization object, by calling ‘authorization.getPermissions(this);’ or ‘authorization.getPermissions(new SignUpService());’. We actually have to instantiate an object for that, as Java does not support methods that are static and abstract at the same time.

I hope that gives you an idea of how to currently use authorization in servlets 🙂

Note: from here on, Michael will take over the further development of the AAA-system, so some things might change in the future.


Under the hood: Authentication (login)

In the second post of ‘Under the hood’, I’ll explain how our current login system works and how you can use it. This is mainly interesting if you want to know how to get API access to resources which will need a login in the future.


First a bit about the overall architecture.

Loklak uses an embedded Jetty HTTP server. It works fundamentally differently from, for example, an Apache server with PHP.

Like in Apache, you can have a folder with static content, like html, css and js files, which just get served. But for dynamic output, you don’t just have java class files in the same folder which get interpreted on the fly like php files do.

Instead, we register so-called servlets on the server before we start it. These servlets are java classes which get an HTTP request as input and send a response at the end. All classes in the api packages are such servlets.

So the idea is to create a so called AAA-system:

  • Authentication (who are you?)
  • Authorization (what are you allowed to do?)
  • Accounting (what/how much did you do already?)

As of now, only the authentication part is ready to use; authorization will follow soon (hopefully early next week).

To make it easy for the servlets to have access to that system, Michael created an abstract class called AbstractAPIHandler. This class is meant to contain all common code the servlets need; the servlets in turn extend this class to implement the actual functionality.

So what does it offer so far? Well, for authentication there are a lot of different ways we want to be able to authenticate a user:

  1. Login via a user id and a password
  2. For browsers: sessions and cookies, so the user stays logged in over multiple requests
  3. For api-access and links: access tokens
  4. Login via a key-pair (not yet implemented)

1. to 3. are implemented and ready to use. Here’s how:

User id and password

Just add

login=user@example.com&password=123456

to a request to any servlet extending AbstractAPIHandler. This will log you in as that user (user@example.com is just a placeholder here).

As we want to use this for browser logins and logins via other tools (for example curl), this does not create a session. A session means the server remembers you over multiple calls. We don’t want that for pure API calls, as APIs are better kept stateless, meaning all calls are independent of each other.

Sessions and Cookies

If you use a browser, you probably want to do the login only once. Afterwards, your browser and the server should remember each other.

There are two different ways to achieve that:

  • Sessions, which are short-lived and forgotten as soon as either the browser or the server shuts down
  • Cookies, which can live for a longer time and survive restarts (at the moment, server restarts also remove them, but this will change in the future)

Technically, these are very similar, but a bit different in how we have to use them.

Anyhow, you can request a session or a cookie with:
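presumably parameters along these lines (the exact names are assumptions):

request_session=true

request_cookie=true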




respectively. Cookies currently expire after one week if not used.


Access tokens

In some use cases you might want to log in without appending your password each time. For example, you may want to create a link which logs a user in (this is done with the email validation link).

Or you want stateless API access via a public-private key login (which is somewhat slow).

Then it’s useful to create an access token. Currently you can do so by calling handshake-client.json. An example:

domain.com/api/handshake-client.json?login=user@example.com&password=123456&valid_seconds=3600

This returns an access token for that user with a lifetime of one hour. You can then use that token to make further queries:
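presumably by passing it as a parameter, roughly like this (the parameter name access_token and the servlet name are assumptions):

domain.com/api/someservlet.json?access_token=<the token>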



That’s it for the login, but I’ll add more as soon as public-private-key login is ready.


Under the hood: logging and start-script

This is the first post in a series I’ll call ‘Under the hood’, explaining work that happens rather on the inside of Loklak and probably gets rarely noticed.

It’ll be mostly about how we solved issues, so people can learn from it when they face similar tasks, and about configuration options people should know about.

I’ll start with two rather light topics: the new logging infrastructure and the changes to the start-script.


Until recently, all logging in Loklak happened by simply letting loklak write to stdout, which is normally the console output. It then got piped to a text file by the start-script.

cmdline="$cmdline -server -classpath $CLASSPATH org.loklak.LoklakServer >> data/loklak.log 2>&1 & echo $! > data/loklak.pid &";

That is a very simple and reliable way to do logging, but it has some major drawbacks:

  • As there’s a lot of logging happening, Loklak would constantly do IO operations (that is, writing to disk). On modern computers, that is usually a bottleneck which costs a lot of CPU time and constantly triggers the harddrive, which is often very slow. Even with modern SSDs, it’s good to avoid too many IO operations.
  • It doesn’t give us any control over the log-files. We can’t set a maximum size for the file. In one case it caused my system to fill up the whole drive, thereby creating stability problems for the whole system.

The interesting part was that in most places, the logging was already done via a function call which is actually meant to log to a real logging backend. So I searched for a solution which would allow us to use our logging function with a real backend that would handle it smartly, without having to change too much code and tell everyone about it.

Log.getLog().info("some stuff");

After some research, I found that the logging function we use, actually the one from Jetty, allows plugging in different logging backends via an abstraction layer called slf4j. All I’d need to do was put the slf4j jar into the classpath and then add the logging backend I wanted.

After some comparison, I settled on log4j2, a quite mature and efficient system which again proved to be quite easy to use. Put the jars into the classpath, create a config file and add the path to the config file to the java execution line:

cmdline="$cmdline -server -classpath $CLASSPATH -Dlog4j.configurationFile=$LOGCONFIG org.loklak.LoklakServer >> /dev/null 2>&1 & echo $! > data/loklak.pid &";

Note that we now pipe any output not going through the logging system to /dev/null, into the void.


So now was the time to see what the logging-backend would offer us. A brief look into our current logging-config:


property.logPath = data
appenders = file

appender.file.type = RollingFile
appender.file.name = LOGFILE
appender.file.fileName = ${logPath}/loklak.log
appender.file.filePattern = ${logPath}/loklak.log-%i.gz
appender.file.layout.type = PatternLayout
appender.file.layout.pattern = %d{yyyy-MM-dd HH:mm:ss.SSS}:%-5level:%C{1}:%t: %msg%n
appender.file.bufferedIO = true
appender.file.bufferSize = 8192
appender.file.immediateFlush = false

rootLogger.level = info
rootLogger.appenderRefs = file
rootLogger.appenderRef.file.ref = LOGFILE

What we see is that we specify an appender, a RollingFile appender, and use it to log all output by Loklak by attaching it to the rootLogger.

If we needed to, we could also have different loggers for different java classes and write them to multiple different places. For example, we could write to a database instead of a file.
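In log4j2’s properties syntax, such an extra logger would look roughly like this (the logger and class names are arbitrary examples, not part of our config):

loggers = es
logger.es.name = org.elasticsearch
logger.es.level = warn
logger.es.appenderRef.file.ref = LOGFILE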

So what are the important points a Loklak admin should know about? Well it’s the following lines:

rootLogger.level = info

appender.file.bufferedIO = true
appender.file.bufferSize = 8192
appender.file.immediateFlush = false

appender.file.filePattern = ${logPath}/loklak.log-%i.gz

What we specify here is:

  • The logging level: if you are debugging something, you can set it to ‘debug’; if you only care about errors, set it to ‘warn’
  • Use a cache for the log. By default 8KB; it might make sense to enlarge it even further on big servers. This makes logging output appear more slowly in the log.
  • Only write when the cache is full. immediateFlush can be toggled on to always write to the file on each logging call. This is very handy for debugging, when you need fast output. Caching with immediate flush toggled on is still faster than writing without caching!
  • When the log-file reaches 10MB, compress it with gzip and rename it to e.g. loklak.log-1.gz. Keep up to three gzipped files around. Here again, it might be desirable to have bigger log-files or more old logs on some systems (a config sketch follows below).
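The last two points correspond to rollover settings; in log4j2’s properties syntax a matching block would look roughly like this (a sketch using the values from the description above, not a verbatim quote of our file):

appender.file.policies.type = Policies
appender.file.policies.size.type = SizeBasedTriggeringPolicy
appender.file.policies.size.size = 10MB
appender.file.strategy.type = DefaultRolloverStrategy
appender.file.strategy.max = 3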

Some one-click-deployment hosts like scalingo have their own logging infrastructure, so they can show the output in a web console. For them we have a second conf file, making the log print to stdout again.

To remember: if you want to write something to the log, use either of these:

Log.getLog().info("some stuff");

Log.getLog().warn("some stuff");

Log.getLog().debug("some stuff");

If you want to print a stacktrace, simply add the exception to it:

catch(Exception e){
    Log.getLog().warn("some stuff", e);
}


That’s about it for the logging. While not much work, it should improve performance and save energy on many systems.


A second, for me very confusing, thing was that the start-script always reported a successful boot of Loklak, even if it failed right away. This was especially nasty when I started one Loklak installation from one folder, forgot to stop it and tried to start another from a different folder. Of course I wouldn’t see what I expected until looking into the log-file.

The reason for that was that the start-script (start.sh) simply called

cmdline="$cmdline -server -classpath $CLASSPATH org.loklak.LoklakServer >> data/loklak.log 2>&1 & echo $! > data/loklak.pid &";

eval $cmdline

echo loklak server started at port $CUSTOMPORT, open your browser at $LOCALHOST

So to make the script check for a successful start, I borrowed a trick from the stop-script (stop.sh).

It uses a so-called PID file. Note that in the cmdline, the process ID of Loklak gets written to data/loklak.pid.

Internally, one of the first things Loklak does on startup is

File pid = new File(dataFile, "loklak.pid");
if (pid.exists()) pid.deleteOnExit();

deleteOnExit tells the java virtual machine to remove the file when it shuts down. So whenever Loklak shuts down, for whatever reason, the file is gone. (Exception: when the JVM fails unexpectedly. Very rare.)

So what the stop-script does is:

  • look into the PID-file for the process ID
  • tell the operating-system to send a shut-down signal to it
  • wait until the PID-file disappears, so we know it shut down

For the startup check I just added another file, startup.tmp, which is handled as follows (a code sketch follows the list):

  • As with the PID-file, Loklak calls deleteOnExit on it
  • When Loklak fails to start up correctly, the file just disappears
  • When the startup finishes correctly, Loklak writes ‘done’ into it
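In Java, that mechanism could look roughly like this (an illustrative sketch, not the exact loklak code; error handling omitted):

File startup = new File(dataFile, "startup.tmp");
startup.createNewFile();
startup.deleteOnExit();   // if the JVM dies during startup, the file vanishes
// ... server initialization happens here ...
try (FileWriter writer = new FileWriter(startup)) {
    writer.write("done");  // tells the start-script that startup finished
}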

So what the startup-script does is just:

  • Wait until startup.tmp either disappears (and then print an error message)
  • Or until it has ‘done’ as content, telling us everything is right

Simple story but very convenient for the user.

A very nice side effect of this is that our travis build system now checks whether the server starts up successfully. If we have some error in the code or the config, we know when creating a pull request.


We now have a proper logging backend and startup check in the start-script.

If you want to log something in the java code, use

Log.getLog().info("some stuff");

or the other variations as mentioned above.

That’s it for the first post. The next one will be about the login system or the new peer features.


As of today, June 12th 2016, Elasticsearch, whose output wasn’t logged before at all, now logs to the general loklak.log file.

See https://github.com/loklak/loklak_server/pull/572 for details
