Implementing SSL on your server is relatively straightforward. It's funny how things always seem relatively straightforward after you finally get them implemented. Apache2, in Debian, starts with the ports.conf file. Add NameVirtualHost *:80 to support regular virtual hosts, and also a NameVirtualHost *:443 for the SSL hosts.
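On Debian the stock ports.conf wraps the SSL listener in an IfModule block; a minimal version (values illustrative) looks like this:

```apache
# /etc/apache2/ports.conf
NameVirtualHost *:80
Listen 80

<IfModule mod_ssl.c>
    NameVirtualHost *:443
    Listen 443
</IfModule>
```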
Enable mod ssl:
sudo a2enmod ssl
Then, for your regular virtual hosts, change <VirtualHost *> to <VirtualHost *:80>. Without SSL you need not distinguish between ports and can just use a wildcard (*).
In the SSL virtual host block, add the parameters that reference the certs, keys, and the document root:
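As a sketch, the SSL block looks something like this (the domain and file paths are hypothetical; substitute your own cert and key locations):

```apache
<VirtualHost *:443>
    ServerName example.com
    DocumentRoot /var/www/example

    SSLEngine on
    SSLCertificateFile      /etc/ssl/certs/example.com.crt
    SSLCertificateKeyFile   /etc/ssl/private/example.com.key
    # Chain file, if your CA (e.g. StartSSL) provides an intermediate cert
    SSLCertificateChainFile /etc/ssl/certs/ca-chain.pem
</VirtualHost>
```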
For this website the document root is the same as the one defined in the non-SSL virtual host block. The cert was created using StartSSL and is just a regular cert. The EV certs are pretty interesting in that they turn the address bar green in modern browsers; they cost money, though, and require a business entity. While this entire website can be accessed over SSL, there are many links in the code that point only to http, which triggers mixed-content warnings. The admin area, on the other hand, is fully SSL encrypted. You can force WordPress to use SSL for the entire admin area using (in your wp-config.php file):
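The relevant constant is FORCE_SSL_ADMIN:

```php
define('FORCE_SSL_ADMIN', true);
```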
Place the above code underneath the already existing 'define' parameters. Now when you navigate to wp-admin it will automatically load the login page as https. Just remember to configure port forwarding of 443 to your server. You should also install a plugin called User Locker. Yes, the admin area may be encrypted, but that does not stop brute-force attacks. User Locker does for WordPress roughly what fail2ban does at the server level. I always recommend fail2ban, especially to handle SSH brute-force attacks on systems that need password login enabled.
To start, Apache2 cannot serve multiple SSL virtual hosts from a single IP address. It can handle multiple regular virtual hosts, plus a single SSL host.
Name-based virtual hosting cannot be used with SSL secure servers because of the nature of the SSL protocol. 
I plan to implement a store to sell various items, and I want to make people feel comfortable. To feel comfortable the visitor needs to see the lock icon in their web browser, and maybe a third-party EV certificate so the address bar turns green in newer browsers.
SSH can create forward and reverse tunnels to transmit port data through the encrypted connection. You can create forward tunnels that link the local port of the client machine to the remote port on the server. You can create reverse tunnels that link the remote ports of the server to the local ports on the client. Forward tunnels are created using the -L option. Reverse ports are configured using the -R option. The forward and reverse tunnels work fine when working locally on the client machine, but if you want other hosts, at the client location, to connect to the tunnels you need to enable the gateway ports feature of ssh.
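For example (hostnames and port numbers are hypothetical):

```shell
# Forward tunnel: connections to port 8080 on the client are
# carried over ssh and delivered to port 80 on the server.
ssh -L 8080:localhost:80 user@server.example.com

# Reverse tunnel: connections to port 2222 on the server are
# carried back to port 22 on the client machine.
ssh -R 2222:localhost:22 user@server.example.com
```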
Gateway ports do not work with reverse tunnels on Ubuntu 8.04, the long-term support branch. The OpenSSH version included in the repositories does not allow reverse tunnels combined with the gateway ports option (-g on the client, GatewayPorts in sshd_config). Forward tunnels do work with the gateway ports feature. The option appears to be accepted, but when you run ssh with -vv you will see that during the connection process the reverse tunnel fails to be created.
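For reference, gateway ports are enabled in two places (hostnames hypothetical). On the client, the -g flag binds forwarded ports to all interfaces rather than just loopback:

```shell
# Other hosts on the client's LAN can now reach port 8080.
ssh -g -L 8080:localhost:80 user@server.example.com
```

For reverse tunnels, the setting lives on the server side:

```
# /etc/ssh/sshd_config: required for -R tunnels to be reachable
# by hosts other than the server itself.
GatewayPorts yes
```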
I think this is an old bug that has since been addressed. I do see posts about successes, so I'm inclined to think that newer versions of OpenSSH have fixed it.
Instead of installing a newer version of ssh I just implemented an alternative for hosts at the client location. I wanted to use a single computer as a gateway to ports located on a remote server. It would have been nice to consolidate all communications to a single ssh connection.
I ended up using http://haanstra.eu/putty/. Each workstation at the client location will establish a direct ssh connection, bypassing the gateway ports feature. There will be many ssh connections, but this is all behind-the-scenes stuff that the employees will never see, and it will provide the needed functionality.
Both the master and slave servers reside on a host machine; clearly, the servers are virtual machines. It's strategic to use virtual machines because the host machine remains useful for a variety of other tasks, and each virtual server compartmentalizes its functions into a discrete file that is easy to back up. In my particular arrangement the master server is synchronized with the slave server. The slave server has the same applications installed as the master: a fully functional Apache web server with PHP and MySQL support. The MySQL database is replicated in real time, and rsync is used to sync the Apache web root. In the rsync job I skip the WordPress and wiki config files, in which I reference the MySQL server on the slave machine. Doing so allows server2 to be a drop-in replacement for the master just by switching the port forwarding settings in the router. I would have to reconfigure my.cnf as the master in such a transplant. The mail server has SpamAssassin and ClamAV installed on master and slave, and /var/mail is synchronized to keep a relatively real-time backup.
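The web-root sync can be sketched with rsync excludes (paths and config file names are illustrative; LocalSettings.php is assumed to be the wiki's config file):

```shell
# Mirror the web root to the slave, skipping the config files that
# point each machine at its own MySQL server.
rsync -az --delete \
    --exclude 'wp-config.php' \
    --exclude 'LocalSettings.php' \
    /var/www/ user@server2:/var/www/
```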
Then once a day the slave server is backed up. Vboxtool handles this seamlessly: it stops the virtual machine, transfers it, and automatically restarts it. The initial backup goes to backup1. Then rsync is used to back up the home folder on my host system. I also use the computer for my own personal things, which are effectively backed up to backup1. Subsequently, the host initiates an rsync of backup1 to backup2, then to backup3, so I have an rsync duplicate of backup1 on both backup2 and backup3.
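A sketch of the nightly cascade, with mount points and paths hypothetical:

```shell
# vboxtool has already stopped the VM, copied it to backup1, and
# restarted it; the host then cascades the backups with rsync.
rsync -az /home/ /mnt/backup1/home/
rsync -az --delete /mnt/backup1/ /mnt/backup2/
rsync -az --delete /mnt/backup1/ /mnt/backup3/
```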
The host is also a MythTV server, and backup3 is the client that is connected to the TV. I have my various hosts relatively diversified and multitasked.
I have four systems in my house. The primary one I label serverhost. On serverhost the /etc/hosts file is the same as the hosts file on the other machines. The hosts file is formatted as follows:
192.168.1.100 serverhost.bgevolution.com serverhost
192.168.1.101 server.bgevolution.com server
192.168.0.102 tv.bgevolution.com tv
192.168.0.103 server2.bgevolution.com server2
Server2 is new. I could further clean up the scheme by changing the static address of tv to 103 and server2 to 102, but for now it's OK. Now I can reference hosts, with their respective IP addresses, simply by hostname. This is convenient for setting permissions. Permissions can be a pain in the neck, and if you reference a host by IP, localhost, and/or the hostname, you will have to create permissions for each. For example, your MySQL server will need permissions for each hostname you want to connect to it from. Connecting to the server from serverhost I need permissions for email@example.com. But if I am connecting to MySQL through phpMyAdmin I need permissions for firstname.lastname@example.org.
I have replicated the MySQL server from server to server2, which I connect to with the phpMyAdmin installation on server. On server2 I must give permissions to email@example.com to be able to connect from server. With no prior experience dealing with permissions it is a pain, but after getting everything running you will realize that it makes sense, and it does increase security.
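In MySQL that amounts to one grant per connecting hostname (the database, user, and password here are hypothetical):

```sql
-- Allow the same user in from serverhost and from localhost;
-- each host string is a separate account to MySQL.
GRANT ALL PRIVILEGES ON wordpress.* TO 'webuser'@'serverhost'
    IDENTIFIED BY 'secret';
GRANT ALL PRIVILEGES ON wordpress.* TO 'webuser'@'localhost'
    IDENTIFIED BY 'secret';
FLUSH PRIVILEGES;
```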
I want the slave server to also behave as a master, and ultimately I want to experiment with setting up a round-robin MySQL arrangement, meaning that as http requests come in to the website, it randomly pulls database information from either server or server2. This should prove fun.
I've said for years that VirtualBox was good stuff. I've used it since it was owned directly by Innotek. Now that Sun has purchased the relatively small European company, the software has exploded with new features and stability. Years ago USB support was in its infancy; now it's robust. Years ago processor support was limited; now it's fully SMP enabled. I like the nested paging option, reducing RAM overhead and integrating guests into, rather than partitioning, the host. A while back there were problems with the host processor not syncing completely with the guest, causing load even when no guest load was present. That was resolved a long time ago, and now running multiple machines only causes load on the host when load is indeed present in the guest. My system has run stably for a long time now, only requiring host reboots when new updates are available.
How can paid systems compete with free? I can understand that in a corporate environment commercial support may be needed, and in such an application I would imagine Sun provides commercial-level support for its VirtualBox product. This is indeed the case for the paid systems of VMware, Parallels, and others. But ultimately, if in-house tech support is up to the task, VirtualBox is a free product. This also allows people to get their hands on it for learning purposes. I have learned VirtualBox by working with it for years. I am very much discouraged from learning many software packages because I would have to pay for them. I will not be learning Windows 7 for this reason.
San Francisco – Desktop virtualization is one of those technologies that confound the experts. Just when you think you’ve got it figured out, along comes some interloping development to upset the apple cart. Most recently, that role has fallen to Sun’s VirtualBox, the plucky open source VM solution that’s quickly gobbling up the general-purpose desktop virtualization space left vacant by Microsoft and VMware. Users from the three major platforms — Windows, Mac OS X, and Linux — are flocking to VirtualBox for its scalability, robust networking, and bargain price point (it’s free).[Source]