
Category Archives: Application Server

Install RabbitMQ on Windows

 

  1. Download the RabbitMQ installer from the official RabbitMQ website
  2. Install it
  3. Enable the management plugin
    • Open a command prompt (cmd)
    • Go to this path: C:\Program Files (x86)\RabbitMQ Server\rabbitmq_server-3.3.4\sbin
    • Run rabbitmq-plugins.bat enable rabbitmq_management and press the Enter key
    • rabbitmq-service.bat stop
    • rabbitmq-service.bat install
    • rabbitmq-service.bat start
  4. Open the management UI at http://localhost:15672
    • User: guest
    • Password: guest
  5. Create a user (verification commands follow after this list):
    • Add a new user, say user ‘test’ with password ‘test’
      rabbitmqctl add_user test test
    • Give administrative rights to the new user
      rabbitmqctl set_user_tags test administrator
    • Set permissions for the newly created user
      rabbitmqctl set_permissions -p / test ".*" ".*" ".*"
  6. Allow guest login from other machines (by IP)
    • Edit C:\Users\[User Name]\AppData\Roaming\RabbitMQ\rabbitmq.config
    • And add: [{rabbit, [{loopback_users, []}]}].
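You can verify the new user and its permissions from the same sbin directory; these are standard rabbitmqctl commands, but they are not part of the original steps:

rabbitmqctl.bat list_users
rabbitmqctl.bat list_permissions -p /

After editing rabbitmq.config in step 6, restart the RabbitMQ service (rabbitmq-service.bat stop, then rabbitmq-service.bat start) so the change takes effect.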
 

Posted by on February 14, 2017 in RabbitMQ

 

Get started with RabbitMQ on Android (Android Studio)

By: LOVISA JOHANSSON (cloudamqp)

This guide explains how to create a simple chat application in Android using Android Studio and RabbitMQ. Everyone that has the application will be able to send and receive messages from all other users that are using the same application.

If you are using Eclipse, check out this blog post instead.

In the code given, messages will first be added to an internal queue and the publisher will send messages from the internal queue to RabbitMQ when there is a connection established. The message will be added back to the queue if the connection is broken.

RabbitMQ Android

This guide assumes that you have downloaded, installed and set up everything correctly for Android Studio.

Start by creating a new Android project: open Android Studio and go to File -> New -> New Project.

1. Configure your new project

  1. Enter the project information as specified below.
  2. Select the form factors your app will run on.
  3. Select whether you would like to add an activity to your app. In this example we choose Blank Activity to get autogenerated files for the project.
  4. Customize the activity.

2. Add Java AMQP library to project

RabbitMQ has developed an excellent Java AMQP library. The full API documentation for the library can be found here.

We need to include the RabbitMQ Java Client Library and reference the jar files in the project. In Android Studio you can create a libs folder at the same level as the app. Copy and paste the jars into this libs folder. Select all the jar files and press “Add As Library…” as seen in the image below.

add rabbitmq library

You can confirm that the jars have been added as libraries by opening build.gradle and checking under dependencies; all the files should be listed there.

dependencies {
  ...
  compile files('libs/rabbitmq-client.jar')
  ...
}

NOTE: Only if you are using Android Gradle plugin 0.7.0 and get the error “Duplicate files copied in APK” when you later run your application, you need to add packagingOptions to your build.gradle file as specified here.

android {
  packagingOptions {
    exclude 'META-INF/LICENSE.txt'
    exclude 'META-INF/NOTICE.txt'
  }
}

3. Android Manifest, internet permission

We need to tell the Android system that this app is allowed to access the internet. Open the AndroidManifest.xml file, located in the root of the project. Add the user permission android.permission.INTERNET just before the closing </manifest> tag.

<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
      package="com.cloudamqp.rabbitmq"
     android:versionCode="1"
     android:versionName="1.0">
     ......
     <uses-permission android:name="android.permission.INTERNET"></uses-permission>
</manifest>

4. Start coding

Layout

Create the view for the application. The .xml layout file can be found under res -> layout. What we have here is a root ScrollView containing an EditText, a Button and a TextView. The EditText will be used as an input field for the text that will be sent. The text will be published when the button is pressed, and all messages received by the subscriber will be printed to the TextView.

<ScrollView xmlns:android="http://schemas.android.com/apk/res/android"
  ...
  <EditText
  android:id="@+id/text"
  android:layout_width="fill_parent"
  android:layout_height="wrap_content"
  android:background="#ffffff"
  android:hint="Enter a message" />

  <Button
  android:id="@+id/publish"
  android:layout_width="match_parent"
  android:layout_height="wrap_content"
  android:layout_below="@+id/text"
  android:text="Publish message" />

  <TextView
  android:id="@+id/textView"
  android:layout_width="match_parent"
  android:layout_height="wrap_content"
  android:layout_below="@+id/publish"
  android:textColor="#000000" />
  ...
</ScrollView>

Publish

Create an internal message queue. In this case a BlockingDeque is used. BlockingQueue implementations are designed to be used primarily for producer-consumer queues.

private BlockingDeque<String> queue = new LinkedBlockingDeque<String>();
void publishMessage(String message) {
  try {
    Log.d("","[q] " + message);
    queue.putLast(message);
  } catch (InterruptedException e) {
    e.printStackTrace();
  }
}

Create a setup function for the ConnectionFactory. The connection factory encapsulates a set of connection configuration parameters, in the original CloudAMQP example the CLOUDAMQP_URL (which can be found in the control panel for your instance). In the code below the broker host/IP is set directly with setHost instead.

ConnectionFactory factory = new ConnectionFactory();
private void setupConnectionFactory() {
  String uri = "IP"; // broker host name or IP address
  try {
    factory.setAutomaticRecoveryEnabled(false);
    //factory.setUri(uri);
    factory.setHost(uri);
  } catch (Exception e1) {
    // setUri throws checked exceptions (KeyManagementException,
    // NoSuchAlgorithmException, URISyntaxException); with setHost only,
    // a general catch keeps the snippet compilable either way
    e1.printStackTrace();
  }
}

Create a publisher that publishes messages from the internal queue. Messages are added back to the queue if an exception is caught. The publisher will try to reconnect every 5 seconds if the connection is broken.

A thread (a “background” or “worker” thread, or use of the AsyncTask class) is needed when we have operations to perform that are not instantaneous, such as the network access involved in connecting to RabbitMQ.

We will use a fanout exchange. A fanout exchange routes messages to all of the queues that are bound to it; the routing key is ignored. If N queues are bound to a fanout exchange, a new message published to that exchange will be copied and delivered to all N queues. Fanout exchanges are ideal for broadcast routing of messages.

public void publishToAMQP()
{
  publishThread = new Thread(new Runnable() {
    @Override
    public void run() {
      while(true) {
        try {
          Connection connection = factory.newConnection();
          Channel ch = connection.createChannel();
          ch.confirmSelect();

          while (true) {
            String message = queue.takeFirst();
            try{
              ch.basicPublish("amq.fanout", "chat", null, message.getBytes());
              Log.d("", "[s] " + message);
              ch.waitForConfirmsOrDie();
            } catch (Exception e){
              Log.d("","[f] " + message);
              queue.putFirst(message);
              throw e;
            }
          }
        } catch (InterruptedException e) {
          break;
        } catch (Exception e) {
          Log.d("", "Connection broken: " + e.getClass().getName());
          try {
            Thread.sleep(5000); //sleep and then try again
          } catch (InterruptedException e1) {
            break;
          }
        }
      }
    }
  });
  publishThread.start();
}

Subscriber

We have now created the publisher, and it is time to create the subscriber. The subscriber takes a handler as a parameter. The handler will print messages to the screen as they arrive. The subscribe thread will try to reconnect every 5 seconds if the connection is broken.

void subscribe(final Handler handler)
{
  subscribeThread = new Thread(new Runnable() {
    @Override
    public void run() {
      while(true) {
        try {
          Connection connection = factory.newConnection();
          Channel channel = connection.createChannel();
          channel.basicQos(1);
          DeclareOk q = channel.queueDeclare();
          channel.queueBind(q.getQueue(), "amq.fanout", "chat");
          QueueingConsumer consumer = new QueueingConsumer(channel);
          channel.basicConsume(q.getQueue(), true, consumer);

          while (true) {
            QueueingConsumer.Delivery delivery = consumer.nextDelivery();
            String message = new String(delivery.getBody());
            Log.d("","[r] " + message);
            Message msg = handler.obtainMessage();
            Bundle bundle = new Bundle();
            bundle.putString("msg", message);
            msg.setData(bundle);
            handler.sendMessage(msg);
          }
        } catch (InterruptedException e) {
          break;
        } catch (Exception e1) {
          Log.d("", "Connection broken: " + e1.getClass().getName());
          try {
            Thread.sleep(5000); //sleep and then try again
          } catch (InterruptedException e) {
            break;
          }
        }
      }
    }
  });
  subscribeThread.start();
}

Call all the functions listed above from onCreate. The handler used by the subscribe function is also created in onCreate. A Handler has to be used because it is only possible to write to the GUI from the main thread.

@Override
public void onCreate(Bundle savedInstanceState) {
  super.onCreate(savedInstanceState);
  setContentView(R.layout.activity_main);

  setupConnectionFactory();
  publishToAMQP();
  setupPubButton();

  final Handler incomingMessageHandler = new Handler() {
    @Override
    public void handleMessage(Message msg) {
      String message = msg.getData().getString("msg");
      TextView tv = (TextView) findViewById(R.id.textView);
      Date now = new Date();
      SimpleDateFormat ft = new SimpleDateFormat ("hh:mm:ss");
      tv.append(ft.format(now) + ' ' + message + '\n');
    }
  };
  subscribe(incomingMessageHandler);
}

void setupPubButton() {
  Button button = (Button) findViewById(R.id.publish);
  button.setOnClickListener(new OnClickListener() {
    @Override
    public void onClick(View arg0) {
      EditText et = (EditText) findViewById(R.id.text);
      publishMessage(et.getText().toString());
      et.setText("");
   }
  });
}

The subscribe and publish threads can both be interrupted when the application is destroyed by adding the following code in onDestroy:

Thread subscribeThread;
Thread publishThread;
@Override
protected void onDestroy() {
  super.onDestroy();
  publishThread.interrupt();
  subscribeThread.interrupt();
}

 

Copy from: https://www.cloudamqp.com/blog/2015-07-29-rabbitmq-on-android.html

 

How to enable GZip compression in XAMPP server

By: TarranJones

When we test our website with tools.pingdom.com, we get this error:

The following publicly cacheable, compressible resources should have a “Vary: Accept-Encoding” header

 

Find apache\conf\httpd.conf

Uncomment the following line (remove the #):

LoadModule deflate_module modules/mod_deflate.so

Some versions may require you to uncomment the following lines instead:

LoadModule headers_module modules/mod_headers.so
LoadModule deflate_module modules/mod_deflate.so

Finally, add this line to your .htaccess file:

SetOutputFilter DEFLATE
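The Pingdom warning above is specifically about the “Vary: Accept-Encoding” header. A slightly fuller .htaccess sketch that compresses common text types and appends that header might look like this (assuming mod_deflate and mod_headers are enabled; adjust the MIME types to your site):

<IfModule mod_deflate.c>
    AddOutputFilterByType DEFLATE text/html text/plain text/css application/javascript
</IfModule>
<IfModule mod_headers.c>
    Header append Vary Accept-Encoding
</IfModule>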

Copy from: http://stackoverflow.com/questions/6993320/how-to-enable-gzip-compression-in-xampp-server

 

Posted by on August 1, 2016 in XAMPP

 

How to install and configure Solr 6 on Ubuntu 16.04

By: www.howtoforge.com

What is Apache Solr? Apache Solr is an open source, enterprise-class search platform written in Java which enables you to create custom search engines that index databases, files, and websites. It is built on top of the Apache Lucene library. It can, for example, be used to search across multiple websites and can show recommendations for the searched content. Solr uses an XML (Extensible Markup Language) based query and result language. There are also APIs (application programming interfaces) available for Python, Ruby and JSON (JavaScript Object Notation).

Some other features that Solr provides are:

  • Full-Text Search.
  • Snippet generation and highlighting.
  • Custom Document ordering/ranking.
  • Spell Suggestions.

This tutorial will show you how to install the latest Solr version on Ubuntu 16.04 LTS. The steps will most likely work with later Ubuntu versions as well.

Update your System

Log in to your Ubuntu server with a non-root sudo user. You will perform all the following steps, and use Solr later, through this user.

Execute the following command to update your system with the latest patches and updates.

sudo apt-get update && sudo apt-get upgrade -y

Install Ubuntu System updates.

Setting up the Java Runtime Environment

Solr is a Java application, so the Java runtime environment needs to be installed first in order to set up Solr.

We have to install the python-software-properties package (which provides the add-apt-repository command) in order to add the PPA for the latest Java 8. Run the following command to install the software.

root@server1:~# sudo apt-get install python-software-properties
Reading package lists… Done
Building dependency tree
Reading state information… Done
The following additional packages will be installed:
libpython-stdlib libpython2.7-minimal libpython2.7-stdlib python python-apt
python-minimal python-pycurl python2.7 python2.7-minimal
Suggested packages:
python-doc python-tk python-apt-dbg python-apt-doc libcurl4-gnutls-dev
python-pycurl-dbg python-pycurl-doc python2.7-doc binutils binfmt-support
The following NEW packages will be installed:
libpython-stdlib libpython2.7-minimal libpython2.7-stdlib python python-apt
python-minimal python-pycurl python-software-properties python2.7
python2.7-minimal
0 upgraded, 10 newly installed, 0 to remove and 3 not upgraded.
Need to get 4,070 kB of archives.
After this operation, 17.3 MB of additional disk space will be used.
Do you want to continue? [Y/n]

Press Y to continue.

Install Python.

After executing the command, add the webupd8team Java PPA repository to your system by running:

sudo add-apt-repository ppa:webupd8team/java

Press [ENTER] when requested. Now, you can easily install the latest version of Java 8 with apt.

First, update the package lists to fetch the available packages from the new PPA:

sudo apt-get update

Update Ubuntu 16.04

Then install the latest version of Oracle Java 8 with this command:

sudo apt-get install oracle-java8-installer

Alternatively, you can install OpenJDK 8 instead. For the JDK:

sudo apt-get install openjdk-8-jdk

Or, for just the JRE:

sudo apt-get install openjdk-8-jre

root@server1:~# sudo apt-get install oracle-java8-installer
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
 binutils gsfonts gsfonts-x11 java-common libfontenc1 libxfont1 x11-common xfonts-encodings xfonts-utils
Suggested packages:
 binutils-doc binfmt-support visualvm ttf-baekmuk | ttf-unfonts | ttf-unfonts-core ttf-kochi-gothic | ttf-sazanami-gothic ttf-kochi-mincho | ttf-sazanami-mincho ttf-arphic-uming firefox
 | firefox-2 | iceweasel | mozilla-firefox | iceape-browser | mozilla-browser | epiphany-gecko | epiphany-webkit | epiphany-browser | galeon | midbrowser | moblin-web-browser | xulrunner
 | xulrunner-1.9 | konqueror | chromium-browser | midori | google-chrome
The following NEW packages will be installed:
 binutils gsfonts gsfonts-x11 java-common libfontenc1 libxfont1 oracle-java8-installer x11-common xfonts-encodings xfonts-utils
0 upgraded, 10 newly installed, 0 to remove and 3 not upgraded.
Need to get 6,498 kB of archives.
After this operation, 20.5 MB of additional disk space will be used.
Do you want to continue? [Y/n]

Press Y to continue.

You must agree to the license (available at http://java.com/license) if you want to use the Oracle JDK; accept it by clicking the OK button.

Accept Java License

Downloading Java

The package installs a kind of meta-installer which then downloads the binaries directly from Oracle. After the installation process, check the installed Java version by running the following command:

java -version

java version "1.8.0_91"
Java(TM) SE Runtime Environment (build 1.8.0_91-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.91-b14, mixed mode)

Now you have installed Java 8 and we will move to the next step.

Installing the Solr application

Solr can be installed on Ubuntu in different ways; in this article, I will show you how to install the latest release from the official binary distribution.

We will begin by downloading the Solr distribution. First find the latest available version on the Solr download page, copy the link, and download it using the wget command.

For this setup, we will use http://www.us.apache.org/dist/lucene/solr/6.0.1/

cd /tmp
wget http://www.us.apache.org/dist/lucene/solr/6.0.1/solr-6.0.1.tgz

root@server1:/tmp# wget http://www.us.apache.org/dist/lucene/solr/6.0.1/solr-6.0.1.tgz
--2016-06-03 11:31:54-- http://www.us.apache.org/dist/lucene/solr/6.0.1/solr-6.0.1.tgz
Resolving www.us.apache.org (www.us.apache.org)... 140.211.11.105
Connecting to www.us.apache.org (www.us.apache.org)|140.211.11.105|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 137924507 (132M) [application/x-gzip]
Saving to: ‘solr-6.0.1.tgz’

Now run the command below to extract the service installation script:

tar xzf solr-6.0.1.tgz solr-6.0.1/bin/install_solr_service.sh --strip-components=2

And install Solr as a service using the script:

sudo ./install_solr_service.sh solr-6.0.1.tgz

The output will be similar to this:

 root@server1:/tmp# sudo ./install_solr_service.sh solr-6.0.1.tgz
id: ‘solr’: no such user
Creating new user: solr
Adding system user `solr' (UID 111) ...
Adding new group `solr' (GID 117) ...
Adding new user `solr' (UID 111) with group `solr' ...
Creating home directory `/var/solr' ...

Extracting solr-6.0.1.tgz to /opt


Installing symlink /opt/solr -> /opt/solr-6.0.1 ...


Installing /etc/init.d/solr script ...


Installing /etc/default/solr.in.sh ...

● solr.service - LSB: Controls Apache Solr as a Service
 Loaded: loaded (/etc/init.d/solr; bad; vendor preset: enabled)
 Active: active (exited) since Fri 2016-06-03 11:37:05 CEST; 5s ago
 Docs: man:systemd-sysv-generator(8)
 Process: 20929 ExecStart=/etc/init.d/solr start (code=exited, status=0/SUCCESS)

Jun 03 11:36:43 server1 systemd[1]: Starting LSB: Controls Apache Solr as a Service...
Jun 03 11:36:44 server1 su[20934]: Successful su for solr by root
Jun 03 11:36:44 server1 su[20934]: + ??? root:solr
Jun 03 11:36:44 server1 su[20934]: pam_unix(su:session): session opened for user solr by (uid=0)
Jun 03 11:37:05 server1 solr[20929]: [313B blob data]
Jun 03 11:37:05 server1 solr[20929]: Started Solr server on port 8983 (pid=20989). Happy searching!
Jun 03 11:37:05 server1 solr[20929]: [14B blob data]
Jun 03 11:37:05 server1 systemd[1]: Started LSB: Controls Apache Solr as a Service.
Service solr installed.

Use this command to check the status of the service

service solr status

You should see an output that begins with this:

root@server1:/tmp# service solr status
● solr.service - LSB: Controls Apache Solr as a Service
 Loaded: loaded (/etc/init.d/solr; bad; vendor preset: enabled)
 Active: active (exited) since Fri 2016-06-03 11:37:05 CEST; 39s ago
 Docs: man:systemd-sysv-generator(8)
 Process: 20929 ExecStart=/etc/init.d/solr start (code=exited, status=0/SUCCESS)

Jun 03 11:36:43 server1 systemd[1]: Starting LSB: Controls Apache Solr as a Service...
Jun 03 11:36:44 server1 su[20934]: Successful su for solr by root
Jun 03 11:36:44 server1 su[20934]: + ??? root:solr
Jun 03 11:36:44 server1 su[20934]: pam_unix(su:session): session opened for user solr by (uid=0)
Jun 03 11:37:05 server1 solr[20929]: [313B blob data]
Jun 03 11:37:05 server1 solr[20929]: Started Solr server on port 8983 (pid=20989). Happy searching!
Jun 03 11:37:05 server1 solr[20929]: [14B blob data]
Jun 03 11:37:05 server1 systemd[1]: Started LSB: Controls Apache Solr as a Service.
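As an additional sanity check from the shell (not part of the original tutorial), you can ask Solr for the status of its cores over HTTP:

curl "http://localhost:8983/solr/admin/cores?action=STATUS&wt=json"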

Creating a Solr search collection:

Using Solr, we can create multiple collections. Run the command below, giving the name of the collection (here gettingstarted) and specifying its configuration set.

sudo su - solr -c "/opt/solr/bin/solr create -c gettingstarted -n data_driven_schema_configs"

root@server1:/tmp# sudo su - solr -c "/opt/solr/bin/solr create -c gettingstarted -n data_driven_schema_configs"

Copying configuration to new core instance directory:
/var/solr/data/gettingstarted

Creating new core 'gettingstarted' using command:
http://localhost:8983/solr/admin/cores?action=CREATE&name=gettingstarted&instanceDir=gettingstarted

{
 "responseHeader":{
 "status":0,
 "QTime":4427},
 "core":"gettingstarted"}

The new core directory for our first collection has been created. To view the default schema file, go to:

/opt/solr/server/solr/configsets/data_driven_schema_configs/conf

 

Use the Solr Web Interface

Apache Solr is now accessible on its default port, which is 8983. The admin UI should be accessible at http://your_server_ip:8983/solr. The port has to be allowed through your firewall for these links to work (see the ufw example below).

For example:

http://192.168.1.100:8983/solr/
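If your server uses the ufw firewall, opening the port could look like this (an example, not from the original tutorial):

sudo ufw allow 8983/tcp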

The Solr web interface.

To see the details of the first collection that we created earlier, select the “gettingstarted” collection in the left menu.

Details of our data collection.

After you have selected the “gettingstarted” collection, select Documents in the left menu. There you can enter real data in JSON format that will be searchable by Solr. To add data, copy and paste the following example JSON into the Document(s) field:

{
 "id": 1,
 "book_title": "My First Book",
 "published": 1985,
 "description": "All about Linux"
}

Click on the Submit Document button after adding the data.

Submit a document to Solr.

Status: success
Response:

{
 "responseHeader": {
 "status": 0,
 "QTime": 189
 }
}

Now we can click on Query in the left menu and then click on Execute Query.

Execute a query in Solr.

We will see something like this:

{
  "responseHeader":{
    "status":0,
    "QTime":24,
    "params":{
      "q":"*:*",
      "indent":"on",
      "wt":"json",
      "_":"1464947017056"}},
  "response":{"numFound":1,"start":0,"docs":[
      {
        "id":"1",
        "book_title":["My First Book"],
        "published":[1985],
        "description":["All about Linux"],
        "_version_":1536108205792296960}]
  }}
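The same indexing and querying can also be done from the shell through Solr's HTTP API. A short sketch against the gettingstarted core created above, using made-up sample data:

curl 'http://localhost:8983/solr/gettingstarted/update?commit=true' -H 'Content-Type: application/json' -d '[{"id": 2, "book_title": "My Second Book", "published": 1990, "description": "More about Linux"}]'

curl 'http://localhost:8983/solr/gettingstarted/select?q=description:linux&wt=json'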

Virtual machine image download of this tutorial

This tutorial is available as a ready-to-use virtual machine image in ovf/ova format for Howtoforge subscribers. The VM format is compatible with VMWare and Virtualbox. The virtual machine image uses the following login details:

SSH / Shell Login

Username: administrator
Password: howtoforge

This user has sudo rights.

Please change all the above passwords to secure the virtual machine.

Conclusion

After successfully installing Solr on Ubuntu, you can now insert and query data with the Solr API and the web interface.

 

Copy from: https://www.howtoforge.com/tutorial/how-to-install-and-configure-solr-on-ubuntu-1604/

 

Posted by on June 28, 2016 in Application Server, Linux, Solr, Ubuntu

 

Handling URL Binding Failures in IIS Express

By Vaidy Gopalakrishnan

 

 

Overview

IIS Express was designed to allow the most common web development and testing tasks to be performed without administrative privileges. For example, you can run a website locally using a non-reserved port. You can also test your website with SSL using a self-signed test certificate and a port in the range 44300 to 44399. See Running IIS Express without Administrative Privileges for details.

However, you might occasionally need to use IIS Express for testing scenarios that are not enabled by default. For example, although IIS Express is not designed to be a production web server like IIS, you might need to test external access to your website. Similarly, you might want to test your site using SSL or using a specific reserved port number.

By default, if you use IIS Express to test these scenarios, it reports a URL binding failure. This occurs because IIS Express does not have sufficient privileges to perform these types of tasks. You can run IIS Express as an administrator to bypass these restrictions, but this is not a good practice for security reasons.

The correct approach to testing with IIS Express in these scenarios is to configure HTTP.sys to allow IIS Express running under standard permissions to perform the tasks. When your testing is complete, you can revert the configuration. For security reasons, these tasks are restricted to administrators and cannot be performed by standard (non-administrator) users.

About HTTP.sys

HTTP.sys is an operating system component that handles HTTP and SSL traffic for both IIS and IIS Express. By default, HTTP.sys prevents applications (including IIS Express) from doing the following operations if the application is run by a standard user:

  • Using reserved ports such as 80 or 443
  • Serving external traffic
  • Using SSL

You can configure HTTP.sys to permit these operations for IIS Express. On Windows 7 and Windows Vista, you can configure HTTP.sys using the netsh.exe utility. On Windows XP, HTTP.sys can be configured using the httpcfg.exe command-line utility, which is included with Windows XP Service Pack 2 Support Tools.

Using a Reserved Port

By default, you can use IIS Express to run your website using a non-reserved port such as 8080. However, using a reserved port such as 80 or 443 requires work. The steps described below assume you want to support local traffic over port 80.

On Windows 7 or Windows Vista, from an elevated command prompt, run the following command:

netsh http add urlacl url=http://localhost:80/ user=everyone 

This command will allow any user’s application (including your own IIS Express instances) to run using port 80 without requiring administrative privileges. To limit this access to yourself, replace “everyone” with your Windows identity.
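For example, scoping the reservation to a single account might look like this (DOMAIN\yourname is a placeholder for your own Windows identity):

netsh http add urlacl url=http://localhost:80/ user=DOMAIN\yourname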

On Windows XP, you need to run the following command from an elevated command prompt:

httpcfg set urlacl /u http://localhost:80/ /a D:(A;;GX;;;WD) 

After configuring HTTP.sys, you can configure your website to use port 80. This is very straightforward using tools like WebMatrix and Visual Studio 2010 SP1 Beta. You can also manually edit the applicationhost.config file to include the following binding in the sites element.

<binding protocol="http" bindingInformation="*:80:localhost"/>

Your website will now run (locally) using port 80.

When you are done testing your application, you should revert HTTP.sys to its earlier settings.

On Windows 7 or Windows Vista, run the following command from an elevated command prompt:

netsh http delete urlacl url=http://localhost:80/ 

On Windows XP, run the following command from an elevated prompt:

httpcfg delete urlacl /u http://localhost:80/ 

Serving External Traffic

To enable your website to serve external traffic, you need to configure HTTP.sys and your computer’s firewall. The steps described below assume external traffic will be served on port 8080.

The steps for configuring HTTP.sys for external traffic are similar to setting up a site to use a reserved port. On Windows 7 or Windows Vista, from an elevated command prompt, run the following command:

netsh http add urlacl url=http://myhostname:8080/ user=everyone 

On Windows XP, run the following command from an elevated command prompt:

httpcfg set urlacl /u http://myhostname:8080/ /a D:(A;;GX;;;WD) 

After configuring HTTP.sys, you can configure IIS Express to use port 8080 by using WebMatrix or Visual Studio 2010 SP1 Beta, or by editing the applicationhost.config file to include the following binding in the sites element. (Replace myhostname with your computer’s domain name.)

<binding protocol="http" bindingInformation="*:8080:myhostname"/>

You will also need to configure the firewall to allow external traffic to flow through port 8080. The steps will vary depending on which firewall you are using and aren’t described here.
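As one illustration (not covered by the original article), the built-in Windows Firewall on Windows 7 can be opened, and later closed again, from an elevated command prompt roughly like this:

netsh advfirewall firewall add rule name="IIS Express 8080" dir=in action=allow protocol=TCP localport=8080
netsh advfirewall firewall delete rule name="IIS Express 8080"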

When you are done testing your application, revert HTTP.sys to its earlier settings.

On Windows 7 or Windows Vista, run the following command from an elevated command prompt:

netsh http delete urlacl url=http://myhostname:8080/

On Windows XP, run the following command from an elevated prompt:

httpcfg delete urlacl /u http://myhostname:8080/ 

Using a Custom SSL Port

If you want to test SSL access to your site, you can do this with IIS Express by using an SSL port between 44300 and 44399 and using the IIS Express self-signed certificate. Trying to use SSL with a port outside this range results in a URL binding failure when your website is launched under IIS Express.

For general instructions on how to configure HTTP.sys to support SSL, see How to: Configure a Port with an SSL Certificate. As an example, imagine that you want to test your website using the URL https://localhost:443.

First, determine the SHA1 thumbprint for the IIS Express self-signed certificate. This thumbprint is different for each computer because the IIS Express setup program generates a new certificate when executed. You can determine the SHA1 thumbprint using the Microsoft Management Console (MMC) Certificate snap-in by looking at the computer’s Personal certificate store. Alternatively, you can use the .NET CertMgr.exe utility as shown below. From a command prompt, run the following command.

certmgr.exe /c /s /r localMachine MY 

This command displays information about all the certificates in the Personal certificate store for the local computer. Search for “IIS Express Development Certificate” in the output to locate the IIS Express self-signed certificate and then note its SHA1 thumbprint.

Next, configure HTTP.sys to associate the self-signed certificate with the URL. On Windows 7 or Windows Vista, start by creating a unique UUID using uuidgen.exe or some other tool. Then run the following command from an elevated prompt, passing the thumbprint to the certhash parameter. (Exclude the spaces when you specify the thumbprint.)

netsh http add sslcert ipport=0.0.0.0:443 certhash=<thumbprint> appid={00112233-4455-6677-8899-AABBCCDDEEFF}

For the appid parameter, pass the unique UUID you created earlier.

On Windows XP, run the following command from an elevated prompt.

httpcfg set ssl -i 0.0.0.0:443 -h <thumbprint>

Since 443 is a reserved port, you will additionally need to configure HTTP.sys to allow IIS Express to use it while running as a standard user. For details, see the Using a Reserved Port section. You won’t need to perform this step if you use a non-reserved custom SSL port such as 44500.

On Windows 7 or Windows Vista, run the following command from an elevated prompt.

netsh http add urlacl url=https://localhost:443/ user=everyone 

On Windows XP, run the following command from an elevated prompt.

httpcfg set urlacl /u https://localhost:443/ /a D:(A;;GX;;;WD) 

After configuring HTTP.sys, configure your website to use the custom SSL port using WebMatrix or Visual Studio 2010 SP1 Beta, or by adding the following binding in the sites element in the applicationhost.config file.

<binding protocol="https" bindingInformation="*:443:localhost"/>

When you are done testing your website, revert HTTP.sys to its earlier settings. On Windows 7 or Windows Vista, run the following commands from an elevated prompt:

netsh http delete sslcert ipport=0.0.0.0:443
netsh http delete urlacl url=https://localhost:443/

On Windows XP, run the following commands from an elevated prompt:

httpcfg delete ssl -i 0.0.0.0:443
httpcfg delete urlacl /u https://localhost:443/ 

Using a Custom SSL Certificate

Setting up a custom SSL certificate is very similar to using a custom SSL port. The steps described in this section assume your website is already serving local SSL traffic using port 44300 and the IIS Express self-signed certificate.

First, you need to install the custom SSL certificate on your computer. Use the MMC Certificate snap-in or CertMgr.exe. As you are installing your certificate, note the SHA1 thumbprint value.

The URL https://localhost:44300 is pre-configured by IIS Express setup to use a self-signed certificate. In order to bind this URL to your custom certificate, you will have to delete the existing association. Skip this step if your hostname and port combination is not associated with an SSL certificate.

On Windows 7 or Windows Vista, run the following command from an elevated prompt:

netsh http delete sslcert ipport=0.0.0.0:44300

On Windows XP, run the following command from an elevated prompt:

httpcfg delete ssl -i 0.0.0.0:44300

The remaining steps are similar to those for configuring a custom SSL port. Create a unique UUID using uuidgen.exe or some other tool.

On Windows 7 or Windows Vista, run the following command from an elevated prompt, passing your custom certificate’s thumbprint (remove any spaces first) to the certhash parameter and passing your UUID.

netsh http add sslcert ipport=0.0.0.0:44300 certhash=<thumbprint> appid={00112233-4455-6677-8899-AABBCCDDEEFF}

On Windows XP, run the following command from an elevated prompt.

httpcfg set ssl -i 0.0.0.0:44300 -h <thumbprint>

Summary

This article explains the steps required to support some scenarios for IIS Express that aren’t enabled by default. Performing them requires you to be an administrator. Even if you don’t have administrative privileges, you can still perform the most common web design and development tasks with IIS Express as a standard user.

 

 

Copy from: http://www.iis.net/learn/extensions/using-iis-express/handling-url-binding-failures-in-iis-express

 

Posted by on December 21, 2015 in IIS

 

Getting Started With RabbitMQ in .net

By: Simon Dixon’s Blog

In the previous two examples I built a simple .net application to demonstrate the first two sections of the RabbitMQ getting started guide in .net. In this post I’ll be looking at the third. Download the Source

3.) Publish/Subscribe

The original article (in Java) is here: http://www.rabbitmq.com/tutorials/tutorial-three-java.html

I’m going to take a slightly different approach to my previous two examples and split the Producer and Consumer into two different Windows Forms. This will allow us to run as many Consumers as we like  and so demonstrate Pub/Sub effectively.

First up is the Producer.

Create a new Form and add the Input TextBox and Button as in the first two examples. Also add a new Button, “Start New Consumer”.

Next create the Consumer Form. We only need to output messages, so one RichTextBox is enough.

In the previous two examples we had pretty much duplicate constructors for both Consumers and Producers. We will now fix this by creating a base class that these can both inherit from. Create a new class called IConnectToRabbitMQ.

public abstract class IConnectToRabbitMQ : IDisposable
    {
        protected IModel Model { get; set; }
        protected IConnection Connection { get; set; }
        public string Server { get; set; }
        public string ExchangeName{ get; set; }
        public string ExchangeTypeName { get; set; }
        public IConnectToRabbitMQ(string server, string exchange, string exchangeType)
        {
            Server = server;
            ExchangeName = exchange;
            ExchangeTypeName = exchangeType;
        }
        //Create the connection, Model and Exchange(if one is required)
        public virtual bool ConnectToRabbitMQ()
        {
            try
            {
                var connectionFactory = new ConnectionFactory();
                connectionFactory.HostName = Server;
                Connection = connectionFactory.CreateConnection();
                Model = Connection.CreateModel();
                bool durable = true;
                if (!String.IsNullOrEmpty(ExchangeName))
                    Model.ExchangeDeclare(ExchangeName, ExchangeTypeName, durable);
                return true;
            }
            catch (BrokerUnreachableException e)
            {
                return false;
            }
        }
        public void Dispose()
        {
            if (Connection != null)
                Connection.Close();
            if (Model != null)
                Model.Abort();
        }
    }

The class name may look a little odd to most as it begins with an “I”; this is usually the naming convention for an interface, but I’m using what I like to call the Simon Says naming convention. I’ll be writing a post about this in the near future. The main gist of it is that I like to have classes tell me what they do. For example, a class which calls a remote service might inherit from a class (or interface) called ICallRemoteServices. So the full class name definition would be FooService : ICallRemoteServices. There would also be an abstract method defined that implements the action, e.g. CallRemoteService. Other examples are IAmAnOrder (for a value object), ICalculateShipping, IDeliverEmail etc. This may seem a little weird but I like it 🙂.

So enough of that for now, let’s go through the class. First we declare properties to hold the familiar IModel and Connection instances. Next up are properties to store the Server, ExchangeName and ExchangeTypeName. ExchangeName is the name of the exchange we want to publish/consume messages from, and ExchangeTypeName holds the type of exchange we want to use (in this example it will be “fanout”). The exchange type is set from a constant declared in the RabbitMQ.Client.ExchangeType class, so for us it will be ExchangeType.Fanout (more on this later). Next we have the ConnectToRabbitMQ() method; this is almost exactly the same as the constructors of the Producer/Consumer classes in my previous two examples, with this additional block which declares the exchange.

bool durable = true;
if (!String.IsNullOrEmpty(ExchangeName))
    Model.ExchangeDeclare(ExchangeName, ExchangeTypeName, durable);

We are declaring a durable exchange of the type ExchangeTypeName with the name ExchangeName. If this exchange has already been declared by another Producer or Consumer, a new one is not created; the existing one is used.

Now we’ll write our Producer.

public class Producer : IConnectToRabbitMQ
    {
        public Producer(string server, string exchange, string exchangeType) : base(server, exchange, exchangeType)
        {
        }
        public void SendMessage(byte[] message)
        {
            IBasicProperties basicProperties = Model.CreateBasicProperties();
            basicProperties.SetPersistent(true);
            Model.BasicPublish(ExchangeName, "", basicProperties, message);
        }
    }

Here we have a nice lightweight publisher; the only difference from our previous examples is that we are publishing to the named exchange stored in ExchangeName. We do not know about or use a queue.

Next is our Consumer, which is slightly more complicated.

public class Consumer : IConnectToRabbitMQ
    {
        protected bool isConsuming;
        protected string QueueName;
        // used to pass messages back to UI for processing
        public delegate void onReceiveMessage(byte[] message);
        public event onReceiveMessage onMessageReceived;
        public Consumer(string server, string exchange, string exchangeType) : base(server, exchange, exchangeType)
        {
        }
        //internal delegate to run the consuming queue on a separate thread
        private delegate void ConsumeDelegate();
        public void StartConsuming()
        {
                Model.BasicQos(0, 1, false);
                QueueName = Model.QueueDeclare();
                Model.QueueBind(QueueName, ExchangeName, "");
                isConsuming = true;
                ConsumeDelegate c = new ConsumeDelegate(Consume);
                c.BeginInvoke(null, null);
        }
        protected Subscription mSubscription { get; set; }
        private void Consume()
        {
            bool autoAck = false;
            //create a subscription
            mSubscription = new Subscription(Model, QueueName, autoAck);
            while (isConsuming)
            {
                BasicDeliverEventArgs e = mSubscription.Next();
                byte[] body = e.Body;
                onMessageReceived(body);
                mSubscription.Ack(e);
            }
        }
        public void Dispose()
        {
            isConsuming = false;
            base.Dispose();
        }
    }

We need to store the name of the queue that we will be binding to the exchange, so we have a field QueueName for this purpose. The next code of interest is the StartConsuming() method. Most of this is familiar, with this additional block:

QueueName = Model.QueueDeclare();
Model.QueueBind(QueueName, ExchangeName, "");

What we are doing here is asking the model to declare a temporary queue for us and give it a random unique name (stored in QueueName); we then bind this queue to the exchange named in ExchangeName.

This is a key concept of exchanges in RabbitMQ: a publisher/producer only knows about the exchange; it publishes messages directly to the exchange and has no concept of a queue. Each consumer also knows about the exchange, but it additionally has a queue that is bound to the exchange. The way I look at it is that one or more Producers own an Exchange (and publish to it) and each Consumer owns a Queue (which is bound to an Exchange).

The Consume() method is very different from what we have seen before (and from the Java example). Instead of using a QueueingBasicConsumer we are using a Subscription. Subscription is part of the RabbitMQ.Client.MessagePatterns namespace in the .net client library. It gives us a nice wrapper around the boilerplate message de-queuing code. More info is here.

mSubscription = new Subscription(Model, QueueName, autoAck);
  .....
  BasicDeliverEventArgs e = mSubscription.Next();
  .....
  mSubscription.Ack(e);

Now we need to add the code for our Producer Form

public string HOST_NAME = "localhost";
 public string EXCHANGE_NAME = "logs";
 private Producer producer;
 //delegate to show messages on the UI thread
 private delegate void showMessageDelegate(string message);
 public PubSub_Producer()
 {
     InitializeComponent();
     //Declare the producer
     producer = new Producer(HOST_NAME, EXCHANGE_NAME, ExchangeType.Fanout);
     //connect to RabbitMQ
     if(!producer.ConnectToRabbitMQ())
     {
         //Show a basic error if we fail
         MessageBox.Show("Could not connect to Broker");
     }
 }
 private int count = 0;
 private void button1_Click(object sender, EventArgs e)
 {
     string message = String.Format("{0} - {1}", count++, textBox1.Text);
     producer.SendMessage(System.Text.Encoding.UTF8.GetBytes(message));
 }
 private void button2_Click(object sender, EventArgs e)
 {
     //Open a new Consumer Form
     PubSub_Consumer consumer = new PubSub_Consumer();
     consumer.Show();
 }

This should be fairly self explanatory. The producer.ConnectToRabbitMQ() call is handled in the base IConnectToRabbitMQ class. We’ve added a little error handling code just in case the broker is unavailable (if it is, run rabbitmq-server -detached from the command line). There’s also a method to handle clicks on the “Start New Consumer” Button, which spawns a new Consumer Form.

Then we have our Consumer Form.

public partial class PubSub_Consumer : Form
    {
        public string HOST_NAME = "localhost";
        public string EXCHANGE_NAME = "logs";
        private Consumer consumer;
        public PubSub_Consumer()
        {
            InitializeComponent();
            //create the consumer
            consumer = new Consumer(HOST_NAME, EXCHANGE_NAME, ExchangeType.Fanout);
            //connect to RabbitMQ
            if (!consumer.ConnectToRabbitMQ())
            {
                //Show a basic error if we fail
                MessageBox.Show("Could not connect to Broker");
            }
            //Register for message event
            consumer.onMessageReceived += handleMessage;
            //Start consuming
            consumer.StartConsuming();
        }
        //delegate to post to UI thread
        private delegate void showMessageDelegate(string message);
        //Callback for message receive
        public void handleMessage(byte[] message)
        {
            showMessageDelegate s = new showMessageDelegate(richTextBox1.AppendText);
            this.Invoke(s, System.Text.Encoding.UTF8.GetString(message) + Environment.NewLine);
        }
    }

This is exactly the same as previous Consumer examples with the additional call to the base class.

Now we can run the project, after making sure the correct Form is opened on startup:

[STAThread]
     static void Main()
     {
         Application.EnableVisualStyles();
         Application.SetCompatibleTextRenderingDefault(false);
         Application.Run(new PubSub_Producer());
     }

Click the “Start New Consumer” Button a couple of times to get a few consumers running, then put your message in the “Producer Input” TextBox and hit send. You should see the message appear in all the Consumer output windows. Good stuff 🙂

Summary

What we have done here is create a fanout exchange named “logs”, and we’ve created some Consumers (three in my example above), each with its own unique temporary queue bound to the exchange. We then published a message to the exchange using our Producer; the exchange routes the message to all bound queues, which in turn deliver it to the Consumers. Download the Source

 

Copy from: https://simonwdixon.wordpress.com/2011/05/19/getting-started-with-rabbitmq-in-net-%E2%80%93-part-3/

 

Posted by on November 18, 2015 in RabbitMQ

 

Ubuntu Server Setup Guide for Django Websites

By: Brent O’Connor.

This guide is a walk-through on how to set up Ubuntu Server for hosting Django websites. The Django stack that will be used in this guide is Ubuntu, Nginx, Gunicorn and Postgres. This stack was chosen solely based on the reading I’ve done and on talking to other Django developers in order to get their recommendations. This stack seems to be one of the latest “standard” stacks for Django deployment. This guide also assumes that you’re familiar with Ubuntu server administration and Django. I needed an example site for this guide, so I chose to use my Django Base Site which is available on Github.

I would also like to thank Ben Claar, Adam Fast, Jeff Triplett and Frank Wiles for their suggestions and input on this guide.

Step 1: Install Ubuntu Server

The version of Ubuntu I’m using for this guide is Ubuntu 11.10 64 bit Server. I’ve installed Ubuntu Server in a VirtualBox VM on my MacBook Pro which is currently running Mac OS X 10.7.2. During the installation of Ubuntu Server I answered the prompts with the following:

Language: English
Install Menu: Install Ubuntu Server
Select a language: English
Select your location: United States
Configure the Keyboard: No
Configure the keyboard: English (US)
Configure the keyboard: English (US)
Hostname: ubuntu-vm
Configure the clock: Yes
Partition disks: Guided - use entire disk and set up LVM
Partition disks: SCSI3 (0,0,0) (sda) - 21.5 GB ATA VBOX HARDDISK
Partition disks: Yes
Partition disks: Continue
Partition disks: Yes
Set up users and passwords: Brent O'Connor
Set up users and passwords: (Enter a username)
Set up users and passwords: ********
Set up users and passwords: ********
Set up users and passwords: No
Configure the package manager: <blank>
Configure taskse1: No automatic updates
Software selection: <Continue>
Install the GRUB boot loader on a hard disk: Yes
Installation complete: <Continue>

Step 2: Setup Port Forwarding

Under the settings for your VM in VirtualBox, click on the “Network” tab and then click on the “Port Forwarding” button. Now click on the plus and add the following settings to set up port forwarding for web and SSH.

Name   Protocol   Host IP   Host Port   Guest IP   Guest Port
SSH    TCP                  2222                   22
Web    TCP                  8080                   80

Step 3: Install Software

Before you begin it might be a good idea to update your system clock:

$ sudo ntpdate time.nist.gov

Download lists of new/upgradable packages:

$ sudo aptitude update

OpenSSH

Since I like to connect to my servers using SSH the first thing I install is openssh-server:

$ sudo aptitude install openssh-server

Since you setup port forwarding in step 2, you should now be able to open up your Terminal and connect to your Ubuntu Server using the following:

$ ssh localhost -p 2222

Python Header Files

The Python header files are needed in order to compile binding libraries like psycopg2.

$ sudo aptitude install python2.7-dev

PostgreSQL

$ sudo aptitude install postgresql postgresql-server-dev-9.1

Make your Ubuntu user a PostgreSQL superuser:

$ sudo su - postgres
$ createuser --superuser <your username>
$ exit

Restart PostgreSQL:

$ sudo /etc/init.d/postgresql restart

Nginx

$ sudo aptitude install nginx

Git

$ sudo aptitude install git

Step 4: Setup a Generic Deploy User

The reason we are setting up a generic deploy user is so that if you have multiple developers who are allowed to do deployments, you can easily add each developer’s SSH public key to the deploy user’s /home/deploy/.ssh/authorized_keys file in order to allow them to do deployments.

$ sudo useradd -d /home/deploy -m -s /bin/bash deploy

Step 5: Install an Example Site

Setup a virtualenv:

$ sudo apt-get install python-setuptools
$ sudo easy_install pip virtualenv
$ cd /usr/local/
$ sudo mkdir virtualenvs
$ sudo chown deploy:deploy virtualenvs
$ sudo su deploy
$ cd virtualenvs
$ virtualenv --no-site-packages example-site
$ exit

Note

I personally use and setup virtualenvwrapper on all my servers and local development machines so that I can use workon <virtualenv> to easily activate a virtualenv. This is why I put all my virtualenvs in /usr/local/virtualenvs.

Make a location for the example site:

$ cd /srv/
$ sudo mkdir sites
$ sudo chown deploy:deploy sites
$ sudo su deploy
$ cd sites
$ git clone git://github.com/epicserve/django-base-site.git example-site
$ cd example-site/
$ git checkout -b example_site 5b05e2dbe5
$ echo `pwd` > /usr/local/virtualenvs/example-site/lib/python2.7/site-packages/django_project_root.pth
$ mkdir -p static/cache
$ exit
$ sudo chown www-data:www-data /srv/sites/example-site/static/cache
$ sudo su deploy

Create the file /srv/sites/example-site/config/settings/local.py and add the following. Make sure to change the password and then save the file. I usually use a random string generator to generate a new password for each new Postgresql database and user:

from base import *

LOCAL_SETTINGS_LOADED = True

DEBUG = True

INTERNAL_IPS = ('127.0.0.1', )

ADMINS = (
    ('Your Name', 'username@example.com'),
)

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'example_site',
        'USER': 'example_site',
        'PASSWORD': '<enter a new secure password>',
        'HOST': 'localhost',
    }
}

Install the sites required python packages:

$ source /usr/local/virtualenvs/example-site/bin/activate
$ cd /srv/sites/example-site/
$ pip install -r config/requirements/production.txt

Create a PostgreSQL user and database for your example-site:

# exit out of the deploy user account
$ exit
$ createuser example_site -P
$ Enter password for new role: [enter the same password you used in the local.py file from above]
$ Enter it again: [enter the password again]
$ Shall the new role be a superuser? (y/n) n
$ Shall the new role be allowed to create databases? (y/n) y
$ Shall the new role be allowed to create more new roles? (y/n) n
$ createdb example_site -O example_site
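At this point, depending on the project, you will typically also need to create the database tables. With Django versions from this era that would usually look something like the following, run as the deploy user with the virtualenv activated (a sketch only; the exact command depends on the project's settings layout):

$ sudo su deploy
$ source /usr/local/virtualenvs/example-site/bin/activate
$ cd /srv/sites/example-site/
$ python manage.py syncdb
$ exit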

Step 6: Daemonize Gunicorn using Ubuntu’s Upstart

Create your Upstart configuration file:

$ sudo vi /etc/init/gunicorn_example-site.conf

Add the following and save the file:

description "upstart configuration for gunicorn example-site"

start on net-device-up
stop on shutdown

respawn

exec /usr/local/virtualenvs/example-site/bin/gunicorn_django -u www-data -c /srv/sites/example-site/config/gunicorn/example-site.py /srv/sites/example-site/config/settings/__init__.py

Start the gunicorn site:

$ sudo start gunicorn_example-site
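You can confirm the Upstart job is running (a quick sanity check, not part of the original guide):

$ sudo status gunicorn_example-site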

Step 7: Setup Nginx to proxy to your new example site

Create a new file with sudo vi /etc/nginx/sites-available/example-site.conf and add the following contents to the file:

server {

    listen       80;
    server_name  localhost;
    access_log   /var/log/nginx/example-site.access.log;
    error_log    /var/log/nginx/example-site.error.log;

    location = /biconcave {
        return  404;
    }

    location  /static/ {
        root  /srv/sites/example-site/;
    }

    location  /media/ {
        root  /srv/sites/example-site/;
    }


    location  / {
        proxy_pass            http://127.0.0.1:8000/;
        proxy_redirect        off;
        proxy_set_header      Host             $host;
        proxy_set_header      X-Real-IP        $remote_addr;
        proxy_set_header      X-Forwarded-For  $proxy_add_x_forwarded_for;
        client_max_body_size  10m;
    }

}

Enable the new site:

$ cd /etc/nginx/sites-enabled
$ sudo rm default
$ sudo ln -s ../sites-available/example-site.conf
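Before starting Nginx, it is worth validating the configuration syntax (this check is not part of the original guide):

$ sudo nginx -t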

Start nginx:

$ sudo /etc/init.d/nginx start

Step 8: Test the new example site

While still connected to your Ubuntu server via SSH run the following, which should spit out the HTML for your site:

wget -qO- 127.0.0.1:80

Since you setup port forwarding in step 2 for web, you should also be able to open up your browser on your local host machine and pull up the website using the URL, http://127.0.0.1:8080.

Copy from: http://epicserve-docs.readthedocs.org/en/latest/django/ubuntu-server-django-guide.html

 

Posted by on November 4, 2015 in Django, Nginx, Python