Thursday, December 24, 2009

Day 4 tests

Yet some more tests. It would seem that performance has stabilized:

ISP Performance Testing Day 3

And here is another data point:








All in all another very reasonable and 'to be expected' level of performance.

VDI Interesting Reads

I'm posting this short blog entry simply to have a quick future reference to a couple of interesting documents I gathered about VDI architecting:



Tuesday, December 22, 2009

Day 2 testing

Did this test at night again. I suspect another problem, because a download of Google Chrome for Mac was going slowly. The test went remarkably well though.

Test 2 for Today


And another test a little later at night. Now it seems closer to what it should be.


Monday, December 21, 2009

New Place, new Internet Connection

So I've moved and had to get a new connection. I stuck with what I knew (Telenet) because I wanted the 20 Mbit speed and 60 GB monthly cap, but I'm seeing very inconsistent bandwidth and speeds... so much so that I'm now starting to record my findings here as evidence.
Times are recorded in GMT, so add +1 hour to get the actual time the measurement was done.

Here's a first measurement:










Saturday, December 12, 2009

Google Chrome OS and the Microsoft EC case re: browser integration

Hmm, working with the early developer preview of Chrome OS recently got me thinking how this whole rigamarole with Microsoft and the EC regarding the "tie-in" of Internet Explorer in Windows is a farce (or will be, when we look back on it 5 years from now). Microsoft is being penalized and told to 'jump through a few hoops' (by making other browsers equally available) because the inclusion of a browser as part of the OS is unfair competition...
I'm not denying that this action on Microsoft's part has hurt other browser makers. The point I'm making is that developing on an OS that is driven by one company could turn out to be a hazardous business if the company owning the OS decides to rewrite the rules. Since they own the OS, you could argue that it's within their rights to do so.

Now in that light, what are we to make of Google Chrome OS? Talk about shutting out the competition: there is no (easy) way to use another browser on that OS (or any other application, for that matter). Quite clever of course, because no one's ever going to (be able to) complain that you cannot use a browser of your choosing on it and that it is unfair competition. Since it's a free, open-sourced OS, there is no competition, so no unfair practices here. In fact, Google is even tapping into the large pool of OS programmers (apparently some of them with ties to Ubuntu) to help speed up development of the OS.

I'm sure that Chrome OS will carve out a nice piece of the Netbook market once it's ready for prime time, if the initial publicity and buzz on the Net are anything to go by. The current early releases already show amazingly fast boot times, and the Chrome browser, well, it's pretty fast too.
So what then if Chrome OS carves out a big enough piece of the ever-growing Netbook market and becomes a secondary de facto standard OS? Or can we even call it an OS? In essence it is just enough OS to support their Chrome browser. Would we see a renewed discussion on unfair practices, this time starring Google?

Interesting times ahead indeed...

Saturday, December 05, 2009

Syncing your Snow Leopard Mac with Amazon S3

First a bit of history: I have had an Amazon S3 account for a while now and, as per another blog entry, I've been trying to put it to good use in concert with my MacBook. In the past I've tried to get MacFUSE with the s3fs tools to work, however that wasn't so easy, and I haven't found any evidence on the web that anyone has been successful in getting it to work (unless they're not telling, of course).

I did find some people having success with command line tools such as s3sync (a Ruby based script) and s3cmd, which is also ruby based.

The article that got me going on this solution is found here.


 

What does it do?

Ideally what I wanted is to mount an S3 bucket as a drive that is available as an icon in the Finder, much like what you get when you have a MobileMe account and get iDisk. The solution presented here does not give quite the same functionality as that, but it's close.

What this solution does is sync a folder of your choice to your S3 bucket of choice. The way it does this is with an upload script that syncs the content of your local folder of choice (this is a shell script that calls the s3sync tool). This script is run by launchd (based on a proper plist file to configure it) whenever it sees that a change is made in your folder of choice.

Unfortunately this does not take into account any changes made in nested folders…

At the moment the only way to sync the whole folder is to trigger it by putting a file in your folder of choice, which you can then delete later. Another solution might be to have the script run every 5 minutes or so, based on whether there is network connectivity. On Linux it could be made more sophisticated with the use of inotifywait. Apparently on Mac OS there is an equivalent going by the name kqueue, but I have yet to find out how to use this from a shell script (if even possible).

Until I've found a better way, this is the way I do an automated full recursive folder sync.
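In practice that manual trigger can be as simple as the two commands below (a minimal sketch, assuming ~/backup is the watched folder from the setup below; .sync-trigger is just a throwaway marker file, and removing it triggers one more sync):

$ touch ~/backup/.sync-trigger

$ rm ~/backup/.sync-trigger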


 

Prerequisites

  1. An Amazon S3 account, which once created will give you an Access Key ID and a Secret Access Key that you need in the scripts below. If you don't have one, you can apply for it here.
  2. A copy of s3sync; get it here and also take the time to read the README file.


 

Optionally

  1. s3cmd, which you can get here. This command line utility is handy for checking your work as you progress.
  2. Apple's Property List Editor (comes with the Xcode developer tools, which you'll find on your installation DVD). This comes in handy for creating a plist config file for launchd. You can also get a shareware copy of a similar program here.


 

The optional tools are not required to get this solution going, but they do come in handy when debugging your work and for creating/editing plist files easily (this can also be done using a standard command line text editor such as nano or pico).


 

Ok, so how do we put all this stuff together?

First of all, you need to do all of this on the command line, which you get by running the Terminal application (found in your Utilities folder, which is in your Applications folder).


 

Step 1; Setting up s3sync

First of all you need to download the tool; I installed it in my home directory.

$ wget http://s3.amazonaws.com/ServEdge_pub/s3sync/s3sync.tar.gz

$ tar xvzf s3sync.tar.gz

The 2 commands above will create a directory called s3sync. You can now clean up (remove the tarball you downloaded) like so:

$ rm s3sync.tar.gz


 

The following 2 steps are optional and only needed if you want your uploads/downloads to be encrypted through SSL. You'll need to download some certificates for SSL to work. First create a directory in your s3sync directory to store the certificates, like so:

$ cd s3sync

$ mkdir certs

$ cd certs

$ wget http://mirbsd.mirsolutions.de/cvs.cgi/~checkout~/src/etc/ssl.certs.shar


 

Then run the following commands:

$ sh ssl.certs.shar

$ cd ..

These commands install the certs and get you back into the s3sync directory.


 

Now would be a good time to create the folder you are going to sync. I created one in my home folder that I called "backup".

Also create a bucket on Amazon S3 and, in that bucket, create a directory that you'll use as the target for the syncing. Name that directory the same as your folder of choice (and of course, no spaces in the folder name).
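If you prefer the command line for the bucket part, the s3cmd.rb script that ships with s3sync can create it. A sketch, assuming the example bucket name "syncbucket" used in the scripts below (the exact command syntax is documented in the s3sync README):

$ export AWS_ACCESS_KEY_ID=yourS3accesskey

$ export AWS_SECRET_ACCESS_KEY=yourS3secretkey

$ ruby s3cmd.rb createbucket syncbucket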


 

Next you need to create two shell scripts.


 

The first can be called upload.sh (or whatever you prefer) with the following content:

#!/bin/bash 

# script to upload local directory up to S3 

cd /path/to/yourshellscript/ 

# your S3 credentials and the directory holding the SSL certs
export AWS_ACCESS_KEY_ID=yourS3accesskey
export AWS_SECRET_ACCESS_KEY=yourS3secretkey
export SSL_CERT_DIR=/your/path/to/s3sync/certs

ruby s3sync.rb -r --ssl --no-md5 --delete ~/backup/ syncbucket:backup

# copy and modify line above for each additional folder to be synced


 

The second script can be called download.sh and is the same as the script you created above, so do a cp upload.sh download.sh and simply change the last line as follows:

ruby s3sync.rb -r --ssl --no-md5 --delete --make-dir syncbucket:backup ~/


 

So the only difference is that the source and destination of the sync have been swapped.
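That also means restoring onto a fresh machine (after repeating Step 1 there and copying the scripts over) is just a matter of running the download script, which pulls the backup folder back down from S3:

$ ./download.sh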


 

The final thing to do is for security (you don't want anyone peeking at your Amazon key and secret):

$ chmod 700 upload.sh 

$ chmod 700 download.sh


 

This ensures that only you (or rather, your account) can read these files.
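At this point a quick manual test is worthwhile (a sketch, assuming the bucket and folder names used above, and that the same AWS_* variables are exported in your shell so s3cmd.rb can authenticate; the list syntax is in the s3sync README):

$ touch ~/backup/testfile

$ ./upload.sh

$ ruby ~/s3sync/s3cmd.rb list syncbucket:backup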


 


 

Step 2; Creating a Launchd job to keep your directory synced

This part of the solution was inspired by a rather old, but still relatively useful posting here that explains the workings of Launchd and whose example is exactly what we need for this solution.


 

Basically we'll need to create a .plist file that we will place in a folder that may not yet exist:

~/Library/LaunchAgents (that is, the Library/LaunchAgents folder inside your home folder)


 

If it doesn't, create it now.
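From the Terminal that's a one-liner:

$ mkdir -p ~/Library/LaunchAgents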


 

Then we create the .plist file in this directory as follows. You can use nano, for instance, and name the file your.domain.s3sync.plist:

<?xml version="1.0" encoding="UTF-8"?>

<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">

<plist version="1.0">

<dict>

<key>Label</key>

<string>your.domain.s3sync</string>

<key>ProgramArguments</key>

<array>

<string>/Users/youraccount/s3sync/upload.sh</string>

</array>

<key>WatchPaths</key>

<array>

<string>/Users/youraccount/backup</string>

</array>

</dict>

</plist>


 

The last thing you need to do is tell launchd to load this script and go watch that folder of choice. This is done like so:


 

$ launchctl load ~/Library/LaunchAgents/your.domain.s3sync.plist
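To confirm that launchd has picked the job up, list the loaded jobs and grep for the Label you used in the plist:

$ launchctl list | grep your.domain.s3sync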


 

And you're all set!


 

Improvements


 

  1. Recursive triggering of the script (one possible approach is sketched below)
  2. More secure setting of the environment variables (currently they are contained in the script, which is only readable by your user account)
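For the first point, a possible workaround (a sketch, not tested): instead of WatchPaths, launchd also supports a StartInterval key that simply runs the job on a fixed schedule. Swapping the WatchPaths block in the plist above for the two lines below would make the upload script run every 300 seconds and re-sync the whole tree, nested folders included, at the cost of syncing even when nothing has changed:

<key>StartInterval</key>

<integer>300</integer>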


 


 


 

Monday, June 29, 2009

Cloud Computing Review

Not my review, just linking to this article on Techtarget's new Cloud Computing site. The article has an interview with David Malan, a Computer Science lecturer from Harvard, making some fairly "kicking in open doors" kind of remarks. Amazon's AWS is the 'oldest' game in town and as a result has evolved substantially.
A comparison is made with Microsoft's Azure and Google's App Engine, which in my humble opinion is comparing apples and oranges. As it stands, Amazon's service is more of an Infrastructure as a Service, whereas Microsoft's and Google's respective offerings are more Platforms as a Service. They provide an environment in which you can start coding immediately and not have to deal with the mechanics of individual server instances that have to be prepped with the dev platform of your choosing (the Amazon approach). Yes, you don't have to start completely from scratch, you can build on someone else's image, if that would suit your way of working...
Also, as far as I've been able to establish, the article contains an inaccuracy: it states that you can choose machine images with Windows 2008 Server; however, there are currently no Windows 2008 images available on Amazon.

Monday, June 08, 2009

Amazon S3 Cloud Storage on Leopard 10.5.7; a European War Tale

A little while ago I subscribed to Amazon's S3 cloud storage service. Up to now I've been using it sparingly, and the main interface to it has been S3Fox, a Firefox extension. This has actually worked reasonably well; even though it's not necessarily the most intuitive interface, it's still fairly easy to understand (for those that have been around long enough, think Norton Commander style). One of the great advantages for me has been that, by virtue of the fact that Firefox is multi-platform, so is access to S3 through S3Fox. An important fact for me, since I use Linux, Mac OS X and Windows interchangeably.

Today I decided I wanted a better, more intuitive way to interact with S3, starting with my Mac (which I use most often). There were a number of items needed:
1. MacFUSE, a FUSE (Filesystem in Userspace) implementation for Mac OS X
2. S3FS from Google Code, which needs to be compiled.

The whole process is all described here, so I won't repeat that.

Once you have installed MacFUSE and compiled the s3fs program, you are ready to mount your own S3 bucket and start filling it up, either with backups (rsyncing to it, for instance) or just using it as an extra HD. You could even mount multiple buckets.
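For reference, the mount itself boils down to something along these lines (a sketch only: the bucket name and mount point are made up, and the exact option names depend on the s3fs build you compiled, so check its README):

$ mkdir ~/s3drive

$ ./s3fs mybucket ~/s3drive -o accessKeyId=yourS3accesskey -o secretAccessKey=yourS3secretkey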

The picture below shows what it looks like.










One point of note: I was unsuccessful in trying to mount buckets located in Europe. It gave me a 301 error and it is unclear why. I saw a discussion in the Amazon AWS dev forum, but there was no answer, not even from Amazon...

So for now, use only US buckets and this should work.

UPDATE:
It turns out it doesn't work... Even though the S3 bucket gets mounted, it doesn't accept any files, nor can I create directories. So far I've been unable to determine the cause of this problem. It seems that there is a paid version of s3fs, but the pricing ($129) seems disproportionate to its utility... the forums for s3fs also seem devoid of clear guidance on what the problem could be. So far I've been able to determine that there is an issue with the requests to Amazon S3 that this program generates. What I don't understand is why more people haven't complained about this, so I'm guessing that there may be a version issue with the underlying libraries used by s3fs that only I (and two others, it seems) am experiencing...

Wednesday, May 06, 2009

Bandwidth Testing part 2

Continuing on with my bandwidth testing: below are the results after having restarted the cable modem (which supposedly updated its firmware), as instructed by my ISP's support folks:



And below are the results from Speedtest.net, who btw make use of the same software but, I'm sure, a different server infrastructure. The interesting thing is that their results are actually better than those measured solely on Telenet's network (although only marginally so).


The above results are measured on an Asus EEE PC 900 (with built-in wifi of the 802.11b/g denomination).

The results below here are measured on a Macbook running Mac OS X 10.5.6 and a built-in Airport Extreme of the 802.11n (draft) denomination...



And again using the ISP's meter:


Ok so clearly the Asus EEE PC 900's wifi is the bottleneck in these tests and the Macbook is able to use bandwidth all the way up to the ISP's limits...

I wonder if my abysmal download results earlier were a function of limits at the server I was downloading from, or if there was something else limiting going on... hard to troubleshoot these transient kinds of issues.

Anyhow, back to report these results to the support folks @ my ISP....

To be continued....

Tuesday, May 05, 2009

Bandwidth Testing

I have been experiencing strange delays with my Internet download speeds lately that have started to bug me so much that I decided to report the issue to my ISP and see if they are able to help resolve this.

I have a so-called Turbonet subscription which entitles me (according to the latest being advertised on Telenet's website) to 25 Mbps download speed. Originally it was sold to me as 20 Mbps though.

The main thing I've noticed is that download speeds have become erratic and slow at times.

It so happens that a little while ago I started measuring my download speeds, and at first I got this:


The download speed seems fairly close to what is to be expected; the upload speed though seems only half of what it should be.

The second measurement (which was actually earlier this evening, on a Windows 7 machine):



This shows less than half of what it's supposed to be...

I then later did a second test from my Ubuntu Netbook and got the following result:



Very similar to the results earlier from the Windows machine so I'm ruling out any computer related problems.

Finally the result from my ISP's bandwidth meter:


So there my upload speed dropped again; otherwise it's close to the other 2 results of this evening.

Now I'm supposed to reset my router, which I will do tomorrow. This reset is supposed to load the router with its latest firmware according to my ISP.

I'll retest tomorrow, to be continued....

Wednesday, April 01, 2009

Advances in Virtualization

I have been reading various interesting articles on blogs and vendor websites about the oncoming onslaught of new technologies in the virtualization hardware and software space. A lot of the software side was revealed by virtualization leader VMware during VMworld Europe, and on top of that we get Cisco's announcement around their UCS offering.
An interesting discussion by Scott Lowe about some of the technologies involved can be found here. It is indeed more profound than Cisco simply entering the blade server market.
It's essentially the birth of a next-gen virtualization eco-system that takes virtualization to the next level. Of course we're only just standing at the virtual cradle of this new baby, but given how the various developments are coalescing, it's quite clear that this may in fact be a new wave of virtualization, bringing the infrastructure for compute, storage and network to new optimal performance levels, while promising to ease the burden of management. Now we'll see how the traditional server vendors react...

-- Post From My iPhone

Wednesday, March 04, 2009

Next Generation Virtualization Management

There's a dilemma:
- efficiency
- speed of change
and opposite
- control
- manageability


Virtualization and automation help.
Virtualization both introduces new elements to be managed as well as...












The overall sol'n should be SLA driven:
- availability
- security
- performance
It should be utility like.

It consists of the VDC- OS and the management layer.

The VDC-OS already has built-in management features like VMotion and DRS that reduce the amount of management humans have to do.





vCenter becomes the centralized management platform of the virtual datacenter. Aside from operational management tasks, it also needs to provide the means to track how we're doing against SLAs.





On the other hand the user becomes enabled.
Lab manager will serve as the means for this in the future.





Chargeback will also be provided, supporting various models.

Even for those not ready to cross-charge internally, it may serve as a way to create awareness of the cost of resource usage.

Lifecycle Manager is key to automating provisioning and tracking changes throughout a VM's life, right through to decommissioning.





This product tracks configs of servers.

For capacity management there will be Capacity IQ




It can predict when you'll run out of capacity and it identifies overprovisioned VM's.

In order to scale out, you can use vCenter Linked Mode, in case you run out of the maximum number of objects a single vCenter can manage.





Orchestrator allows you to apply actions across a large number of machines automatically using a workflow process, for instance to apply a change across a large number of VM's.

With AppSpeed we can guarantee SLA's at the app level, do assured migrations, and do root cause analysis for performance issues.





VMware aims to work with partners to integrate with the big 4 management platforms.

There are integrations already; some examples:
- BMC Remedy and Lifecycle Manager
- CA data center manager (for apps) and Stage Manager





PSO have operational readiness assessments to help achieve the above.


Tuesday, March 03, 2009

The Financial Meltdown; here's why

There it is... plain and simple, well in as much as the above is simple...

Unfortunately financial experts put all their faith in this formula (to predict the risk of an investment, loans for one) and as a result the money market grew outrageously from 2001 to 2008, when, as we know, defaulting started to happen, which this formula could not predict... the rest is history...

The full article can be found here.

Saturday, February 28, 2009

SQL performance

This talk was created because of persistent misconceptions that virtualization is unfit for, and hurts the performance of, SQL RDBMS's (though this talk is about MS SQL, a lot of it can be generalized to other IO-intensive workloads as well).







The problem: a reduction in storage spindles is one source of performance problems. Queues can also be a problem.

Sequential read performance drops when more hosts are added to your VMFS volume.





One way to fix this is to set memory reservations.





This pitfall is someone using the hosted product and creating a misconception about ESX.

Newer procs are always better. Decrease in pipeline lengths and increase in caches.





Dell's DVD store is a great benchmarking tool.









Servers that had multiple SQL instances are best broken down into multiple VM's.

When doing PoC's ensure you use target HW for correct modelling.

There is a white paper on VMware.com about this topic.





CapacityIQ

The talk starts off with the obligatory statement that what's being discussed is conceptual and may not be the same as the end product.





The problem this tool tries to address: basically, we tend to overprovision because we don't have the right tools to predict better.

Virtualization obviously helps in this area.








There are multiple considerations in a virtual environment:
- resource dependencies
- workload mobility
- CPU and memory optimizations (memory ballooning)
- storage optimizations (thin provisioning, linked clones)





The key is that the environment is dynamic.

Again SLA's are key and need to be guarded.




The architecture:












Do what if scenarios. Capacity dashboard





You can set alert triggers, f.i. send me an email when capacity reaches 80%.





Example of a whatif report




The focus is on intelligent planning using trending, and it gives you tools to optimize your current environment, f.i. by identifying idle VM's.



How the products work together.

Virtualization Optimization Capacity Mgmt best practices SAS institute

This session was on best practices regarding virtualization capacity planning, as per the experience of SAS Institute.

Some of their findings:
- Memory is more important than CPU
- Memory tracking is important

They started very conservatively (as most do when new to virtualization) with a ratio of 11:1. They then used the SAS tool to track performance and behaviors, and from the statistics they were able to gather that they weren't using their virtual infrastructure optimally. They then determined that they could increase their consolidation ratio from 11:1 to 20:1.

The tool name is: SAS IT intelligence for VMware VI.

Along the way they also discovered that CPU Ready is an important metric to track.


Thursday, February 26, 2009

vCenter Orchestrator introduction





Where does it fit in the overall picture?



Now let's define automation and orchestration



We want to do this because we want to drive down operational costs.






The architecture





It is also extensible through a Java API.

Some of the use cases: