Logstash as CSV pump to ElasticSearch


Ever wondered whether Logstash can pump comma- (or tab-) separated data into Elasticsearch? Yes, the whiskers man can do more!

This configuration was tested against Logstash 2.4.0 and Elasticsearch 2.4.1 on Windows 7.

Our input data are tab-separated values describing academic articles, formatted as:

CODE   TITLE   YEAR

An excerpt from our input:


W09-2307    Discriminative Reordering with Chinese Grammatical Relations Features    2009
W04-2607    Non-Classical Lexical Semantic Relations    2004
W01-1314    A System For Extraction Of... And Semantic Constraints    2001
W04-1910    Bootstrapping Parallel Treebanks    2004
W09-3306    Evaluating a Statistical CCG Parser on Wikipedia    2009

I created a file named tab-articles.conf to process this input:


input {
  file {
    path => ["D:/csv/*.txt"]
    type => "core2"
    start_position => "beginning"
  }
}

filter {
  csv {
    columns => ["code","title","year"]
    separator => "    "
  }
}

output {
  elasticsearch {
    action => "index"
    hosts => ["localhost"]
    index => "papers-%{+YYYY.MM.dd}"
    workers => 1
  }
}

Note that the filter's separator value is not the escape sequence “\t”, but a raw TAB character, exactly as it is written in the input file.
Note also that the elasticsearch output option is named hosts, not host.

Now, check that your Elasticsearch server is running, and run the following command:


logstash.bat -f tab-articles.conf

If you already have a file called articles.txt under d:\csv, it won’t be injected into ES, because Logstash is mainly intended for log parsing and thus acts by default as a “tail -f” reader.

So, after starting Logstash, copy your file into the configured input directory.
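
Once the file has been picked up, a quick way to verify that documents actually landed in Elasticsearch is a simple search against the daily index (a minimal sketch, assuming the default HTTP port 9200 and the papers-* index pattern configured above):


curl "http://localhost:9200/papers-*/_search?q=year:2009&pretty"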

Nutch & Solr 6

Intro

This “project” is about setting up Apache Nutch (v1.12) with Solr (6). The main change between this configuration and older ones is that Solr 6 no longer uses the schema.xml file for document parsing. Instead, Solr uses a managed schema, whose config file starts with:


<!-- Solr managed schema - automatically generated - DO NOT EDIT -->

So, just ignore that warning. Back up the original file and start editing. This file is usually located under {SOLR_INSTALL_DIR}/server/solr/{core.dir}/conf/

In my case, for instance, it is /opt/solr621/server/solr/nutchdir/conf/.

You can use the file attached to this project, or edit it yourself by adding the fields from the schema.xml shipped under {NUTCH_INSTALL}/conf.
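
For a rough idea of what that editing looks like, here is a minimal sketch of field declarations added to managed-schema. The field names come from Nutch's stock schema.xml; the types and attributes are illustrative and may differ depending on your Solr/Nutch versions:


<!-- illustrative subset of the fields Nutch expects -->
<field name="url"     type="string"       indexed="true" stored="true" required="true"/>
<field name="content" type="text_general" indexed="true" stored="true"/>
<field name="title"   type="text_general" indexed="true" stored="true"/>
<field name="host"    type="string"       indexed="true" stored="true"/>
<field name="digest"  type="string"       indexed="true" stored="true"/>
<field name="tstamp"  type="date"         indexed="true" stored="true"/>
<field name="boost"   type="float"        indexed="true" stored="true"/>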
Using Nutch

Let /opt/nutch112 be our install directory for Nutch. We’ll be crawling Amazon.com for some BSR (Best Sellers Rank) pages. To crawl using Apache Nutch, follow these steps:


mkdir /opt/nutch112/urls /opt/nutch112/amzcom

echo https://www.amazon.com/Best-Sellers-Sports-Outdoors/zgbs/sporting-goods/ref=zg_bs_nav_0 > /opt/nutch112/urls/seeds.txt

bin/nutch inject amzcom/db urls

bin/nutch generate amzcom/db amzcom/segs

s1=`ls -d amzcom/segs/2* | tail -1`

bin/nutch fetch $s1

bin/nutch parse $s1

bin/nutch updatedb amzcom/db $s1

bin/nutch generate amzcom/db amzcom/segs

s1=`ls -d amzcom/segs/2* | tail -1`

bin/nutch fetch $s1

You can repeat these generate/fetch/parse/updatedb rounds as many times as you want.
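
If you want to run several rounds in one go, a small shell loop does the trick (a minimal sketch using the same commands and paths as above; adjust the number of rounds to taste):


# run three additional crawl rounds
for i in 1 2 3; do
  bin/nutch generate amzcom/db amzcom/segs
  s1=`ls -d amzcom/segs/2* | tail -1`
  bin/nutch fetch $s1
  bin/nutch parse $s1
  bin/nutch updatedb amzcom/db $s1
done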

Once done with fetching, invert the links:


bin/nutch invertlinks amzcom/linkdb -dir amzcom/segs

Now you can either dump the binary segments to inspect their content, read the link database to list the parsed links, or move the data to Solr. To dump physical files from the stored segments:


bin/nutch dump -segment amzcom/segs/ -outputDir amzcom/dump0/

To read the inverted links from the link database:

bin/nutch readlinkdb amzcom/linkdb -dump amzcom/dumplnk

To migrate the data to Solr, the Solr server must be running and you must have at least one core configured with the updated managed-schema:

bin/nutch solrindex http://slmsrv:8983/solr/nutch amzcom/db/ -linkdb amzcom/linkdb/ amzcom/segs/20160924132600 -filter -normalize
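
Once indexing finishes, you can check that documents actually made it into the core with a quick query (same host and core name as above):


curl "http://slmsrv:8983/solr/nutch/select?q=*:*&rows=5&wt=json"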


De-Googling myself! (Step 2)

After kicking Gmail out, there were still some problems with mail transfer. I managed to get a full backup using this old but quite efficient tool: Gmail Backup. I think I’ll be using Thunderbird for desktop use.



The next big thing to deal with is moving my GDrive data somewhere else, outside my computer, and it must not be Dropbox. I’m not sure yet it is the best option, but I like the way Kim Dotcom deals with his problems. So, I decided to stick with megaupload v2: Mega.

Cloudwards suggests sync.com as the best alternative to Dropbox, even though Mega offers 50GB of forever-free storage (vs 5GB for sync.com). This is what I need for the moment. Moreover, Mega has a valuable tool called MEGAsync. So, all I had to do was download my drive to a single directory using the desktop Google Drive app, point that directory as a sync source in MEGAsync… and then just keep my computer connected for one night :).

De-Googling myself! (Step 1)

Many of us will never realize how Google-dependent we are until we reach the maximum free storage capacity.


But what if Google decides to remove the free offer? What can prevent them from doing it? WhatsApp tried, but backed down after a while… Google is not WhatsApp. Did you ever think about how much Google knows about you?

So I decided to start de-Googling myself. There are a lot of alternatives; I just have to be more patient/tolerant with the open source ones and choose carefully. I uninstalled the greedy Google Chrome, downloaded all my files from Google Drive, and I’m now looking for a new solution for mail and remote storage.

Many “alternatives to” websites suggest using “mail.com”. It seems clean and quite interesting as a domain name. But when I receive this kind of message on registration, I’m not sure I can go further.


Then, I came across this beautiful “TutaNota”.


A replacement for Google Services. This is what I’m looking for.

And here is where you can reach me now: sba@keemail.me

Roadmap to master BigData World


Source: nirvacana.com

SMS continuous service with Smslib

In order to create an SMS service that sends SMS messages periodically, you can use Quartz cron triggers combined with SMSLib on top of the Spring Framework.

To set up a Quartz cron, please refer to this post. Next, we’ll need three SQL tables:

– Active SMS
– Archived SMS
– SMPP gateway configuration

The prerequisite library can be found here.

Provided scenario:

Scenario
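
To make the moving parts concrete, here is a minimal sketch of a Quartz job that sends a message through an SMSLib serial modem gateway. Everything in it is illustrative: the class name, gateway id, COM port, modem model and phone number are assumptions, and the SMSLib calls follow the 3.5.x API, where Service is a singleton. In the real service, the job would read pending rows from the Active SMS table and archive them after sending, and the gateway would be a Spring-managed singleton started once rather than on every run.


import org.quartz.Job;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;
import org.smslib.OutboundMessage;
import org.smslib.Service;
import org.smslib.modem.SerialModemGateway;

// Hypothetical Quartz job fired by the cron trigger configured in Spring.
public class SendSmsJob implements Job {

    @Override
    public void execute(JobExecutionContext context) throws JobExecutionException {
        try {
            // Gateway parameters (port, baud rate, manufacturer, model) are assumptions.
            SerialModemGateway gateway =
                new SerialModemGateway("modem.gsm1", "COM1", 115200, "Huawei", "E220");
            gateway.setOutbound(true);

            Service.getInstance().addGateway(gateway);
            Service.getInstance().startService();

            // In the real service, the recipient and text would come from the Active SMS table.
            OutboundMessage msg = new OutboundMessage("+21612345678", "Periodic SMS from the cron job");
            Service.getInstance().sendMessage(msg);

            Service.getInstance().stopService();
        } catch (Exception e) {
            throw new JobExecutionException(e);
        }
    }
}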

Download links for the latest offline Flash Player Installer

Latest Update (05 April 2016) – Flash Player 21

Tired of searching? Here are the direct download links for the offline Flash Player installer:

Found after some search here:

https://helpx.adobe.com/flash-player/kb/installation-problems-flash-player-windows.html#main-pars_header

Load balancing with Apache HTTP

Balanced on the same server


workers.properties

worker.list=router,status
worker.worker1.port=8009
worker.worker1.host=localhost
worker.worker1.type=ajp13
worker.worker1.lbfactor=1
worker.worker1.local_worker=1
worker.worker1.ping_timeout=1000
worker.worker1.socket_timeout=10
worker.worker1.ping_mode=A
#sticky session is not interesting here, just for explanation
#sticky session = stick to the server that gave you the first session-id
worker.worker1.sticky_session=true
#this property will make your load balancer work in active-passive mode
#all requests will be mapped to worker1, but
#on worker1 (server1:8009) failure, all requests will be redirected to worker2
worker.worker1.redirect=worker2

worker.worker2.port=8090
worker.worker2.host=localhost
worker.worker2.type=ajp13
worker.worker2.lbfactor=1
worker.worker2.activation=disabled
worker.worker2.ping_timeout=1000
worker.worker2.socket_timeout=10
worker.worker2.ping_mode=A
worker.worker2.local_worker=0
worker.worker2.sticky_session=true

worker.router.type=lb
worker.router.balanced_workers=worker1,worker2
worker.router.sticky_session=true
worker.status.type=status

httpd.conf

...
LoadModule jk_module modules/mod_jk.so
...
#go to the end of the file and add the following directives
JkWorkersFile /etc/httpd/conf/workers.properties
JkShmFile /etc/httpd/logs/mod_jk.shm
JkLogFile /etc/httpd/logs/mod_jk.log
JkLogLevel debug
# Configure monitoring the LB using jkstatus
JkMount /jkstatus/* status
# Configure your applications (may be using context root)
JkMount /myRootContext* router
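
For this setup to work, each Tomcat instance behind the balancer must expose an AJP connector on the port declared in workers.properties and, if you rely on sticky sessions, a jvmRoute matching the worker name. A minimal sketch of the relevant server.xml lines for the first instance (the second one would use port 8090 and jvmRoute="worker2"):


<!-- server.xml of the first Tomcat instance (worker1) -->
<Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />
<Engine name="Catalina" defaultHost="localhost" jvmRoute="worker1">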

ANT could not find home… variables

Ant lost, could not find home

I was trying to compile Sqoop using the “ant package” command when I ran into the following error:

Error:

Could not find or load main class org.apache.tools.ant.launch.Launcher

Debug:

ant --execdebug

exec "/usr/lib/jvm/jdk1.7.0_79//bin/java" -classpath "/usr/bin/build-classpath: error: JVM_LIBDIR /usr/lib/jvm-exports/jdk1.7.0_79 does not exist or is not a directory:/usr/bin/build-classpath: error: JVM_LIBDIR /usr/lib/jvm-exports/jdk1.7.0_79 does not exist or is not a directory:/usr/lib/jvm/jdk1.7.0_79//lib/tools.jar" -Dant.home="/usr/share/ant" -Dant.library.dir="/usr/share/ant/lib" org.apache.tools.ant.launch.Launcher -cp ""
Error: Could not find or load main class org.apache.tools.ant.launch.Launcher

Quick fix:

The directory is missing; just create an empty one:

             mkdir /usr/lib/jvm-exports/jdk1.7.0_79

Running sample Mahout Job on Hadoop Multinode cluster

This SlideShare introduction is quite interesting. It explains how the K-Means algorithm works.

(Credit to Subhas Kumar Ghosh)

One common problem with Hadoop is an unexplained hang when running a sample job. For instance, I’ve been testing Mahout (cluster-reuters) on a Hadoop multinode cluster (1 namenode, 2 slaves). A sample trace in my case looks like this listing:

15/10/17 12:09:06 INFO YarnClientImpl: Submitted application application_1445072191101_0026
15/10/17 12:09:06 INFO Job: The url to track the job: http://master.phd.net:8088/proxy/application_1445072191101_0026/
15/10/17 12:09:06 INFO Job: Running job: job_1445072191101_0026
15/10/17 12:09:14 INFO Job: Job job_1445072191101_0026 running in uber mode : false
15/10/17 12:09:14 INFO Job:  map 0% reduce 0%

The jobs web console told me that the job State=Accepted, Final Status = UNDEFINED and the tracking UI was UNASSIGNED.

The first thing I suspected was a warning thrown by the hadoop binary:

WARN NativeCodeLoader: Unable to load native-hadoop library for your 
platform... using builtin-java classes where applicable

It had absolutely nothing to do with my problem. I rebuilt the native library from sources, but the job still hung.

I reviewed the namenode logs. Nothing special. Then the various Yarn logs (resourcemanager, nodemanager). No problems. The slaves’ logs. Same thing.

Since we don’t get much information from the Hadoop logs, I searched the net for similar problems. It seems this is a common problem related to memory configuration. I wonder why such problems are still not logged (Hadoop 2.6). I even analyzed the memory consumption using JConsole, but nothing there was alarming.

All the machines used are CentOS 6.5 virtual machines hosted on a 16 GB RAM, i7-G5 laptop. After connecting and configuring the three machines, I realized that the allotted disk space (15 GB for the namenode, 6 GB for the slaves) was not enough. Checking the disk space usage (df -h), only 3% of the disk space was available on the two slaves. This could have been the issue, but Hadoop normally reports such errors.

Looking at yarn-site.xml, I remembered that I had given Yarn 2 GB to run these test jobs:

    <property>    
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>2024</value>
    </property>
    <property>
        <name>yarn.scheduler.minimum-allocation-mb</name>
        <value>512</value>
    </property>

I tried doubling this value on the namenode:

    <property>    
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>4096</value>
    </property>
    <property>
        <name>yarn.scheduler.minimum-allocation-mb</name>
        <value>512</value>
    </property>

I then propagated the changes to the slaves, restarted everything (stop-dfs.sh && stop-yarn.sh && start-yarn.sh && start-dfs.sh), and ran:

    ./cluster-reuters.sh

It works, finally 🙂

Now I’m trying to visualize the clustering results using Gephi.

Gephi Sample clusters graph