CentOS 7.2 VM hangs on startup

Host :  Windows 10
Guest : CentOS 7.2 64x
Tool : Oracle VirtualBox 5.2

Problem:

For an unknown reason, your VM won’t start up and hangs on boot. Like me, you have probably tried many VBoxManage commands, converting the raw image to VDI, checking file access rights, … but you’re still stuck on this screen:

CentOS hangs here

 

Solution :

When the machine shows the boot menu, press “e” to edit the GRUB configuration. Look for the “rhgb” parameter in the kernel boot command line and delete it.


This parameter is on the line starting with “linux16”.

Add “systemd.unit=multi-user.target” at the end of the same line.

Press Ctrl+x to boot with these options; the VM should now come up in multi-user (text) mode.
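For illustration, the edited “linux16” line should end up looking roughly like this (the kernel version and root device below are placeholders, yours will differ; the point is that “rhgb” is gone and the target is appended):

linux16 /vmlinuz-3.10.0-327.el7.x86_64 root=/dev/mapper/centos-root ro crashkernel=auto quiet LANG=en_US.UTF-8 systemd.unit=multi-user.target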

Logstash as CSV pump to ElasticSearch


Ever wondered if Logstash can pump comma-separated data into ElasticSearch? Yes, the whiskers man can do more!

This configuration was tested against Logstash 2.4.0 and ElasticSearch 2.4.1 on Windows 7.

Our input data are tab-separated values of academic articles, formatted as:

CODE   TITLE   YEAR

An excerpt from our input :


W09-2307    Discriminative Reordering with Chinese Grammatical Relations Features    2009
W04-2607    Non-Classical Lexical Semantic Relations    2004
W01-1314    A System For Extraction Of... And Semantic Constraints    2001
W04-1910    Bootstrapping Parallel Treebanks    2004
W09-3306    Evaluating a Statistical CCG Parser on Wikipedia    2009

I created a file named tab-articles.conf to process this input:


input {
  file {
    path => ["D:/csv/*.txt"]
    type => "core2"
    start_position => "beginning"
  }
}

filter {
  csv {
    columns => ["code","title","year"]
    separator => "	"
  }
}

output {
  elasticsearch {
    action => "index"
    hosts => ["localhost"]
    index => "papers-%{+YYYY.MM.dd}"
    workers => 1
  }
}

Note that the filter.separator field does not contain the escape sequence “\t”, but a literal TAB character, exactly as it appears in the input file.
Note also that the ElasticSearch output attribute is [hosts] and not [host].

Now, check if your ElasticSearch server is running, and run the following command:


logstash.bat -f tab-articles.conf

If you already have a file called articles.txt under D:\csv, it won’t be injected into ES, because Logstash is mainly intended for log parsing and thus behaves by default like a “tail -f” reader.

So, after starting Logstash, copy your file into the configured input directory.
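To check that the documents actually landed in ElasticSearch, you can query the daily index created by the output section. A minimal sketch, assuming ES listens on the default port 9200 and that you have curl or a similar HTTP client at hand:

curl "http://localhost:9200/papers-*/_search?q=year:2009&pretty"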

Nutch & Solr 6

Apache Nutch
Intro

This “project” is about setting up Apache Nutch (v1.12) with Solr 6. The main change compared to older configurations is that Solr 6 no longer uses the schema.xml file for document parsing. Instead, Solr uses the managed schema, whose config file starts with:


<!-- Solr managed schema - automatically generated - DO NOT EDIT -->

So, just ignore that warning. Back up the original file and start editing. This file is usually located under {SOLR_INSTALL_DIR}/server/solr/{core.dir}/conf/.

In my case, for instance, it is /opt/solr621/server/solr/nutchdir/conf/.

You can use the file attached to this project or edit it yourself by adding the fields from the schema.xml shipped under {NUTCH_INSTALL}/conf, as sketched below.
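For illustration, the kind of field declarations to copy into managed-schema looks like this. The field names come from Nutch’s schema.xml; the exact field types are an assumption here and should be matched to the types available in your Solr setup:

<field name="host" type="string" stored="false" indexed="true"/>
<field name="digest" type="string" stored="true" indexed="false"/>
<field name="segment" type="string" stored="true" indexed="false"/>
<field name="boost" type="float" stored="true" indexed="false"/>
<field name="tstamp" type="date" stored="true" indexed="false"/>
<field name="title" type="text_general" stored="true" indexed="true"/>
<field name="content" type="text_general" stored="false" indexed="true"/>
<field name="url" type="string" stored="true" indexed="true"/>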
Using NUTCH

Let /opt/nutch112 be our install directory for Nutch. We’ll be crawling Amazon.com for some Best Sellers Rank (BSR) pages. To crawl using Apache Nutch, follow these steps:


mkdir /opt/nutch112/urls /opt/nutch112/amzcom

echo https://www.amazon.com/Best-Sellers-Sports-Outdoors/zgbs/sporting-goods/ref=zg_bs_nav_0 > /opt/nutch112/urls/seeds.txt

bin/nutch inject amzcom/db urls

bin/nutch generate amzcom/db amzcom/segs

s1=`ls -d amzcom/segs/2* | tail -1`

bin/nutch fetch $s1

bin/nutch parse $s1

bin/nutch updatedb amzcom/db $s1

bin/nutch generate amzcom/db amzcom/segs

s1=`ls -d amzcom/segs/2* | tail -1`

bin/nutch fetch $s1

You can repeat the generate / fetch / parse / updatedb cycle as many times as you want, as in the sketch below.
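For instance, a minimal loop to run a few more crawl rounds could look like this (the round count of 3 is arbitrary, and it assumes you run it from /opt/nutch112):

for i in 1 2 3; do
  # generate a new segment from the crawl database
  bin/nutch generate amzcom/db amzcom/segs
  # pick the most recently created segment
  s1=`ls -d amzcom/segs/2* | tail -1`
  bin/nutch fetch $s1
  bin/nutch parse $s1
  # feed the parsed results back into the crawl database
  bin/nutch updatedb amzcom/db $s1
done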

Once done with fetching, invert the links:


bin/nutch invertlinks amzcom/linkdb -dir amzcom/segs

Now, you can either dump the binary segments to inspect the content, read the link database to show the parsed links, or move the data to Solr. To dump physical files from the stored segments:


bin/nutch dump -segment amzcom/segs/ -outputDir amzcom/dump0/

To read the fetched links from the latest segment:

bin/nutch readlink $s1 -dump amzcom/dumplnk

To migrate the data to Solr, the Solr server must be running and you must have at least one core configured with the updated managed-schema:

bin/nutch solrindex http://slmsrv:8983/solr/nutch amzcom/db/ -linkdb amzcom/linkdb/ amzcom/segs/20160924132600 -filter -normalize
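Once indexing is done, you can quickly verify that the documents reached Solr by querying the core (assuming the same host and core name as above):

curl "http://slmsrv:8983/solr/nutch/select?q=*:*&rows=5&wt=json"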

 

De-Googling myself! (Step 2)

After kicking Gmail out, there are still some problems with mail transfer. I managed to get a full backup using this old but quite efficient tool: Gmail Backup. I think I’ll be using Thunderbird for desktop use.

 


The next big thing to deal with is moving my GDrive data somewhere else, outside my computer, and it must not be DropBox. I’m not sure yet that it is the best option, but I like the way Kim Dotcom deals with his problems. So, I decided to stick with megaupload v2: Mega.

Cloudwards suggests sync.com as the best alternative to DropBox, even though Mega offers 50GB of forever-free storage (vs 5GB for sync.com). This is what I need for the moment. Moreover, Mega has a valuable tool called MEGAsync. So, all I had to do was download my drive to a single directory using the desktop Google Drive app, point that directory as a sync source in MEGAsync… and then just keep my computer connected for one night :).

De-Googling myself! (Step 1)

Many of us never realize how Google-dependent we are until we reach the maximum free storage capacity.


But what if Google decides to remove the free offer? What can prevent them from doing it? WhatsApp tried but backed down after a while… and Google is not WhatsApp. Did you ever think about how much Google knows about you?

So I decided to start de-Googling myself. There are a lot of alternatives; I just have to be more patient/tolerant with the open-source ones and choose carefully. I uninstalled the greedy Google Chrome, downloaded all my files from Google Drive, and started looking for a new solution for mail and remote storage.

Many “Alternatives To” websites suggest using “mail.com”. It seems clean and quite interesting as a domain name. But when I receive this kind of message on registration, I’m not sure I can go further.


Then, I came across this beautiful “TutaNota”.


A replacement for Google Services. This is what I’m looking for.

And here is where you can reach me now: sba@keemail.me

Roadmap to master BigData World


Source: nirvacana.com

SMS continuous service with Smslib

In order to create an SMS service that sends SMS messages periodically, you can use Quartz cron jobs mixed with Smslib on a Spring Framework base.

To set up a Quartz cron, please refer to this post. Next, we’ll need three SQL tables (a rough sketch follows the list):

– Active Sms
– Archive Sms
– Smpp Gateway Configuration
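A minimal sketch of what these three tables could look like; the table and column names below are my assumptions, not taken from the original project:

-- SMS messages waiting to be sent by the periodic job (hypothetical schema)
CREATE TABLE active_sms (
  id           BIGINT PRIMARY KEY,
  recipient    VARCHAR(20)  NOT NULL,   -- destination phone number
  message      VARCHAR(480) NOT NULL,   -- text to send
  scheduled_at TIMESTAMP    NOT NULL,   -- when the cron should pick it up
  status       VARCHAR(16)  NOT NULL    -- e.g. PENDING, SENT, FAILED
);

-- SMS messages already sent, kept for history (hypothetical schema)
CREATE TABLE archive_sms (
  id        BIGINT PRIMARY KEY,
  recipient VARCHAR(20)  NOT NULL,
  message   VARCHAR(480) NOT NULL,
  sent_at   TIMESTAMP    NOT NULL
);

-- connection settings for the SMPP gateway used by Smslib (hypothetical schema)
CREATE TABLE smpp_gateway_config (
  id       BIGINT PRIMARY KEY,
  host     VARCHAR(128) NOT NULL,
  port     INT          NOT NULL,
  username VARCHAR(64)  NOT NULL,
  password VARCHAR(64)  NOT NULL
);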

The prerequisite library can be found here.

Provided scenario:


Download links for the latest offline Flash Player Installer

Latest Update (05 April 2016) – Flash Player 21

Tired of searching? Here are the direct download links for the offline Flash Player installer:

Found after some searching, here:

https://helpx.adobe.com/flash-player/kb/installation-problems-flash-player-windows.html#main-pars_header

Load balancing with Apache HTTP

Balancing on the same server

 

workers.properties

worker.list=router,status
worker.worker1.port=8009
worker.worker1.host=localhost
worker.worker1.type=ajp13
worker.worker1.lbfactor=1
worker.worker1.local_worker=1
worker.worker1.ping_timeout=1000
worker.worker1.socket_timeout=10
worker.worker1.ping_mode=A
#sticky session is not interesting here, shown just for explanation
#sticky session = stick to the server that gave you the first session-id
worker.worker1.sticky_session=true
#this property makes your load balancer work in active/passive mode:
#all requests are mapped to worker1, but
#on worker1 (localhost:8009) failure, all requests are redirected to worker2
worker.worker1.redirect=worker2

worker.worker2.port=8090
worker.worker2.host=localhost
worker.worker2.type=ajp13
worker.worker2.lbfactor=1
worker.worker2.activation=disabled
worker.worker2.ping_timeout=1000
worker.worker2.socket_timeout=10
worker.worker2.ping_mode=A
worker.worker2.local_worker=0
worker.worker2.sticky_session=true

worker.router.type=lb
worker.router.balanced_workers=worker1,worker2
worker.router.sticky_session=true
worker.status.type=status

httpd.conf

...
LoadModule jk_module modules/mod_jk.so
...
#go to the end of the file and add the following directives
JkWorkersFile /etc/httpd/conf/workers.properties
JkShmFile /etc/httpd/logs/mod_jk.shm
JkLogFile /etc/httpd/logs/mod_jk.log
JkLogLevel debug
# Configure monitoring the LB using jkstatus
JkMount /jkstatus/* status
# Configure your applications (may be using context root)
JkMount /myRootContext* router
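After restarting Apache, you can check that the balancer is up through the status worker mapped above (localhost is a placeholder for your own server name):

service httpd restart          # or: apachectl restart
curl http://localhost/jkstatus/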

ANT could not find home… variables

Ant lost, could not find home

I was trying to compile Sqoop using the “ant package” command when I ran into the following error:

Error:

Could not find or load main class org.apache.tools.ant.launch.Launcher

Debug:

ant --execdebug

exec "/usr/lib/jvm/jdk1.7.0_79//bin/java" -classpath "/usr/bin/build-classpath: error: JVM_LIBDIR /usr/lib/jvm-exports/jdk1.7.0_79 does not exist or is not a directory:/usr/bin/build-classpath: error: JVM_LIBDIR /usr/lib/jvm-exports/jdk1.7.0_79 does not exist or is not a directory:/usr/lib/jvm/jdk1.7.0_79//lib/tools.jar" -Dant.home="/usr/share/ant" -Dant.library.dir="/usr/share/ant/lib" org.apache.tools.ant.launch.Launcher -cp ""
Error: Could not find or load main class org.apache.tools.ant.launch.Launcher

Quick fix:

The directory is missing; create an empty one:

mkdir /usr/lib/jvm-exports/jdk1.7.0_79
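Then check that Ant launches correctly and re-run the build (a quick sanity check, run from the directory of the project you were building):

ant -version
ant package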