Susi Rule-Score Hierarchy Brainstorming

For Susi’s score system we need a way to assign good score values to the rules. To do so, we should develop a hierarchy that makes it easy to assign scores to new rules.

Please add your suggestions below; as ideas come in, we will update the score hierarchy suggestion accordingly.

Preliminary Consideration: Patterns

We have two kinds of rules: those with patterns and those without. The meanings of these rules are:

with pattern(s):

  • (P+LR) variables in pattern should be used for retrieval in internal Susi’s log (reflection memory)
  • (P+IR) variables in pattern should be used for retrieval in internal databases
  • (P+ER) variables in pattern should be used for retrieval in external databases
  • (P+LS) variables in pattern should be stored in Susi’s memory to be used for reflection later
  • (P+IS) variables in pattern should be stored in internal databases to be used for retrieval later
  • (P+ES) variables in pattern should be stored in external databases to be used for retrieval later

without any pattern:

  • (P-D) default answers if no other rule applies
  • (P-O) overruling of rules which would apply, but should not

Secondary Consideration: Purpose

We have three kinds of purposes for Susi answers:

  • (/A) to answer the user’s question
  • (/Q) to ask the user a question in the context of an objective within Susi’s plan to conduct a conversation
  • (/R) to respond to an answer of the user within Susi’s plan to conduct a conversation. It seems clear that answers in the context of a Susi conversation strategy should have higher priority.

Combinations of Pattern and Purpose Considerations:

To combine the various Pattern and Purpose types, we write the abbreviations of these items together. For example, we want to answer a question of the user, “Are you happy”, with “Yes! Are you happy as well?”, which would be a rule of type P-O/Q. Combining both consideration types gives 8 × 3 = 24 possibilities.
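As a quick sketch, the 24 combinations can be enumerated directly from the abbreviations above (a minimal illustration; the type strings are taken from the lists in the two preceding sections):

```python
# Enumerate all combinations of pattern type and purpose type.
pattern_types = ["P+LR", "P+IR", "P+ER", "P+LS", "P+IS", "P+ES",  # with pattern
                 "P-D", "P-O"]                                    # without pattern
purpose_types = ["/A", "/Q", "/R"]

combinations = [p + q for p in pattern_types for q in purpose_types]
print(len(combinations))  # 8 pattern types x 3 purposes = 24
```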

Score Hierarchy

I believe there should be
– score(R) > score(Q) > score(A):
to steer conversations within a conversation plan.
– score(P-O) > score(P+?) > score(P-D):
overruling comes first, then pattern-directed answers, then default answers for the case that no pattern matches.
– score(P+?S) > score(P+?R):
storing information (= learning) is more important than answering.
– score(P+L?) > score(P+I?) > score(P+E?):
local information takes priority over external information; reflection is most important.

This produces the following order (with decreasing score; the first line has the highest score):

– Overruling of patterns
– R/P-O
– Q/P-O
– A/P-O

– Answering an answer of the user using patterns: learning if possible, otherwise retrieving data
– R/P+LS
– R/P+IS
– R/P+ES
– R/P+LR
– R/P+IR
– R/P+ER

– Asking the user a question with the purpose of learning from the user’s answer
– Q/P+LS
– Q/P+IS
– Q/P+ES
– Q/P+LR
– Q/P+IR
– Q/P+ER

– Just giving an answer to the user’s question
– A/P+LS
– A/P+IS
– A/P+ES
– A/P+LR
– A/P+IR
– A/P+ER

– Fail-over if no other rule applies: just answer anything, but try to start a new conversation
– R/P-D
– Q/P-D
– A/P-D
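The ordering above can be sketched as a score function (a minimal illustration; the numeric values are hypothetical, only the relative order matters):

```python
# Score assignment sketch: higher score = higher priority.
purpose_order = ["R", "Q", "A"]                                   # R > Q > A
pattern_rule_order = ["P+LS", "P+IS", "P+ES",                     # storing (learning) ...
                      "P+LR", "P+IR", "P+ER"]                     # ... before retrieval, L > I > E

def score(purpose, pattern):
    """Compute a score matching the ordered list above."""
    q = len(purpose_order) - purpose_order.index(purpose)         # R=3, Q=2, A=1
    if pattern == "P-O":                                          # overruling: highest tier
        return 1000 + q
    if pattern == "P-D":                                          # defaults: lowest tier
        return q
    # Pattern-directed rules: purpose is the primary key, pattern type secondary.
    p = len(pattern_rule_order) - pattern_rule_order.index(pattern)  # LS=6 ... ER=1
    return 10 * q + p

# The four hierarchy rules hold:
assert score("R", "P-O") > score("Q", "P-O") > score("A", "P-O") > score("R", "P+LS")
assert score("R", "P+ER") > score("Q", "P+LS")   # every R-pattern rule outranks every Q-pattern rule
assert score("A", "P+ER") > score("R", "P-D")    # any pattern rule outranks the defaults
```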


Big Data Collection with Desktop version of loklak wok

A desktop version of loklak wok is now available. The goal of the wok is to collect and parse data from social services like Twitter, enabling users, citizen scientists, and companies to analyze big data.

The origin of the project is a tweet by @Frank_gamefreak. Thank you!

Please join the development on GitHub:


How to compile and run

  • import the required library by running
  • compile with mvn clean install -Pexecutable-jar
  • run the artifact in the target directory: java -jar wok-desktop-0.0.1-SNAPSHOT-jar-with-all-dependencies.jar
  • stop program with ESC key

To be done

  • The code has been hacked and butchered and is some kind of Frankenstein. It needs cleanup.
  • Font size is hardcoded. How ugly is that?
  • It would be cool to have a project for code shared between Android and Desktop version.
  • The only dependency which can not be resolved via Maven is loklakj. Wouldn’t it be cool to change that?
  • The used font does not seem to support Asian characters.

Elasticsearch node built into loklak

loklak has a built-in elasticsearch node, but it can also connect as a transport client to an elasticsearch cluster. Here are some screenshots of ElasticHQ on a 16-shard 8-disk 2-server 16-core loklak cluster.







Android Twitter Search App with loklak

Everyone can create an app using the loklak_wok_android libraries. We now have an Android Tweet Search App that fetches results using the TwitterScraper class.

Check out the code:


  • Devices running Android-KitKat 4.4 or greater are supported.
  • Android Studio

Project setup

  • Download and setup Android Studio
  • Clone/ download this project. Cloning is recommended if you plan to contribute
  • Navigate to the directory where you saved this project, select the root folder, and hit OK.
  • Wait for Android Studio to build the project with Gradle.
  • Once the build is complete, you can start playing around!
  • You can test it by running it on either a real device or an emulated one by going to Run > Run ‘app’ or pressing the Run icon in the toolbar.


Type a query and boom!


Watch out for the WiFi! This app only operates over WiFi.



Growing list of API libraries for loklak

We are very happy that the list of API libraries for loklak is constantly growing. Please check out the following projects to create applications with loklak:


Screencast Tutorial How to install loklak server

bash-3.2$ # Hey guys. Welcome to a tutorial on loklak server.
bash-3.2$ # Let us learn how to install loklak server using Terminal!
bash-3.2$ # Loklak is an open source twitter indexer
bash-3.2$ # You can harvest



Data Collection and Parsing on Android with loklak wok

We now have a data parser to collect the data that you want to analyze. It is called loklak wok and runs on your Android phone. The showcase collects tweet data for loklak.

Please check it out and help test the development version!




Tweet analytics with loklak and Kibana as a search front-end

You can use Kibana to analyze large amounts of Tweet data as a source for statistical data. Please find more info on

Kibana is a tool to “explore and visualize your data”. It is not actually a search front-end, but you can use it as such. Because Kibana is made for elasticsearch, it fits loklak instantly without any modification or configuration. Here is what you need to do:


Kibana is pre-configured with default values to attach to an elasticsearch index containing logstash data. We will use a different index name than logstash: the loklak index names are ‘messages’ and ‘users’. When the Kibana Settings page is visible in your browser, do:

  • On the ‘Configure an index pattern’ Settings page of the Kibana interface, enter “messages” (without the quotes) in the field “Index name or pattern”.
  • As soon as you have typed this in, another field, “Time-field name”, appears, empty and with a red border. Use the select-box arrows on the right side of the empty field to select the one entry that is there: “created_at”.
  • Push the ‘Create’ button.

A page with the name “messages” appears and shows all index fields of the loklak messages index. If you want to search the index from Kibana, do:

  • Click on “Discover” in the upper menu bar.
  • You will probably see a page with the headline “No results found”. If your loklak index is not empty, this may be caused by a too narrow time range; the next step should solve that:
  • Click on the time picker in the top right corner of the window and select e.g. “This month”.
  • A ‘searching’ message appears, followed by a search result page and a histogram at the top.
  • Replace the wildcard symbol ‘*’ in the query input line with a word you want to search for, e.g. ‘fossasia’.
  • You can also select a time period using a click-drag over the histogram to narrow the search result.
  • You can click on the field names on the left border to show a field facet. Click on the ‘+’-sign at the facet item to activate the facet.

The remote search to Twitter with the Twitter scraper is deliberately not implemented using the elasticsearch ‘river’ method, to prevent a user front-end like Kibana from constantly triggering remote searches. Therefore searching with Kibana will not enrich your search index with remote search results. This also means that you won’t see any results in Kibana until you have searched with the /api/search.json API.
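A minimal sketch of such an index-enriching search, assuming a loklak server running locally on the default port 9000 and the standard /api/search.json endpoint with its q parameter (adjust the base URL for your setup):

```python
# Enrich the loklak index via the search API so that results appear in Kibana.
import json
import urllib.parse
import urllib.request

def search_url(query, base="http://localhost:9000"):
    # /api/search.json triggers the remote Twitter scraper and stores
    # the results in the 'messages' index that Kibana reads.
    return base + "/api/search.json?" + urllib.parse.urlencode({"q": query})

url = search_url("fossasia")
print(url)  # http://localhost:9000/api/search.json?q=fossasia

try:
    with urllib.request.urlopen(url, timeout=10) as response:
        data = json.load(response)
        print("indexed", len(data.get("statuses", [])), "messages")
except (OSError, ValueError):
    print("no local loklak server reachable")
```

After a few such searches, the ‘messages’ index pattern in Kibana will show the harvested tweets.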
