Tuesday, November 30, 2010

Ubuntu Server in EC2 Cloud, Easy!

I am starting to create a series of screencasts to demonstrate various topics relating to running Ubuntu on the cloud or as the cloud. The first video demos how easy it is to start your very first Ubuntu server in the Amazon EC2 cloud. If you ever wanted to play with Ubuntu server in the cloud and had any doubts, this video should put them to rest :)



If you think that is cool, and if you want to contribute your own, please grab me. It's really easy to create such screencasts, and I can help get you kick-started. And hey, you'd be helping the Ubuntu community and be on your way to becoming an online celebrity, what's not to like! If you'd like to follow similar screencasts, subscribe to this YouTube channel.

Let me know in the comments what you would like to see in future screencasts, or whether certain topics interest you. If you are using Ubuntu server in the cloud professionally, I'm very interested to hear from you. Grab me (kim0) in #ubuntu-cloud on IRC (Freenode) for a chat. Awaiting the flood of screencasts :)

Wednesday, November 24, 2010

Ubuntu Cloud Q+A weekly meeting

The Ubuntu cloud community is coming together today [3pm UTC/GMT / 7am Pacific / 10am Eastern] for its first weekly Q+A meeting. If you use Ubuntu as a guest OS on a public cloud, if you have built your own private cloud infrastructure on top of Ubuntu, or even if you're just curious about all that cloud babble, please do join us. This first meeting is a great chance to connect with other users and developers of Ubuntu cloud technology. Through these online meetings you will get a chance to connect with the rest of the Ubuntu cloud community, share experiences, ask questions, find areas that interest you and perhaps start contributing to them.

Information on how to connect, and other details, can be found here

Monday, November 22, 2010

Ubuntu Cloud Screencasts Volunteers

Interested in the Ubuntu cloud community? Want to help? Awesome! Here is your chance.

Screencasts are a great way to introduce newcomers to something new. I always find it helpful to view a couple of short videos to get a feel for thing X before I actually start reading about and working on it. That's why I'd like to start a screencast series introducing running Ubuntu in the cloud. The plan is to start with simple stuff (no voodoo here, sorry) in order to demo how simple running Ubuntu in the cloud really is. Of course this could grow into a gigantic series, but for starters I'd like to focus on basic and very common use cases. Here are a few casts I would like to begin with:
  • Creating your first Ubuntu server in the cloud (GUI, CLI or both)
  • Introducing Ubuntu Cloud-Init technology (a small taste appears just below)
  • Customizing (Re-bundling) available Ubuntu images (AMIs)
  • Launching a LAMP app on the cloud
  • Backing up your Ubuntu LAMP cloud instance
  • Creating and Load Balancing a multi-tier LAMP app
This list is by no means set in stone :) It will change according to feedback. Feel free to join the ubuntu-cloud mailing list at https://lists.ubuntu.com/mailman/listinfo/Ubuntu-cloud to discuss and change those topics.
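As that small taste of the Cloud-Init topic above, here's a hedged sketch of what it enables: a plain user-data script handed to the instance at launch time, which cloud-init runs once on first boot (the AMI id and key name are placeholders, and -f is the user-data-file flag of the euca2ools/EC2 API tools):

#!/bin/sh
# cloud-init executes this script once, on the instance's first boot
apt-get update
apt-get install -y apache2
echo "Hello from the Ubuntu cloud" > /var/www/index.html

Launch with something like: euca-run-instances ami-xxxxxxxx -k mykey -t m1.small -f userdata.sh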

If you're interested in recording any of those casts, please do shout at me! You can email me at kim0 [AT] ubuntu.com or grab me for a chat in the #ubuntu-cloud IRC channel on Freenode.

If you're new to all this cloud stuff and would like to see a screencast covering a certain topic, please let me know in the comments (or by email, the mailing list, IRC ...). If there's some demand for a specific topic, I'll try to cover it. Of course, if you can contribute and cover it yourself, that would be awesome indeed. After all, it's all about the community. If you'd like more information about recording screencasts, you can read more here.

Awaiting the flood of excited contributors :)

Thursday, November 18, 2010

Cloud Computing 101, p2

Continuing my part-1 post about cloud computing basics, this second post defines the different types of "clouds" as well as what you gain and lose by using them.

If you look at a cloud solution, it's really a bunch of software layers stacked on top of each other. You have the hardware (servers, disks, switches, routers), bare metal operating systems, hypervisors, and virtual servers, and inside those you have programming languages (Python, Java, PHP), development frameworks, database servers, and your own business logic code living on top! Clouds are categorized as IaaS, PaaS or SaaS. The type of cloud is basically defined by which layers of the stack the cloud abstracts away from you, and which layers you "own" and control. Another categorization scheme is private, public and hybrid clouds. Let's take a quick tour of what each of those cloud types means.

IaaS is Infrastructure as a Service. The cloud abstracts away as little as possible from you. Basically the cloud provides you with virtual servers, networking and storage, and that's it. You use those building blocks, just as you would in any physical datacenter, to build your own compute infrastructure. The only difference is that you don't worry about how the servers are powered or cooled, or what brand of disk or SAN is used, etc. All you care about is your provider's SLA, as mentioned in part-1. Other than that, it's business as usual.
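To make that concrete, here's roughly what consuming IaaS building blocks looks like with euca2ools against EC2 (all ids below are placeholders; a sketch, not a walkthrough):

# a virtual server
euca-run-instances ami-xxxxxxxx -k mykey -t m1.small
# a 50GB chunk of block storage, in the same availability zone
euca-create-volume -s 50 -z us-east-1a
# wire the volume to the server as a raw disk; partition and format as usual
euca-attach-volume vol-xxxxxxxx -i i-xxxxxxxx -d /dev/sdf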

PaaS is Platform as a Service. The cloud abstracts away the infrastructure and some more. The cloud in this case is no longer composed of virtual servers and disks; it is instead a "development framework". When you write code, you are coding against the platform, against the cloud itself. A PaaS cloud, assuming you're creating a web application, would tell you how to route requests to your handlers and how to write code to handle specific requests, would provide an API for storage, and would perhaps provide an API for a database (SQL or NoSQL doesn't matter here). Your application code is written against the API of the cloud. As such, you have no idea about "low level" details such as networking, IP addresses, failed servers or even the number of virtual servers running your code! So essentially you upload your code archive and it just runs on the cloud, no questions asked.
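For a feel of how different that is, here's what deploying to Google App Engine (one popular PaaS, used purely as an illustration; myapp/ is a hypothetical directory holding your code and its app.yaml) looks like:

# upload the code; servers, scaling and request routing are the platform's business
appcfg.py update myapp/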

SaaS is Software as a Service. In this case, you're only using software running on the cloud that someone else has written. If you've used Facebook, Gmail, LinkedIn, Google Docs, Salesforce, etc., that's it. In essence the service you're getting is the actual final "application" you need. This is the highest level of abstraction. You do not concern yourself with infrastructure, nor with code to build an application with. You pay to use the application itself, and the SLAs you get are for the application's availability and your data's availability.

Which type of cloud suits you best is a question that needs some thought, and it depends on one's set of requirements. IaaS clouds provide the least abstraction and the most control! They are a good first step for migrating off-the-shelf software to the cloud and benefiting from cheap, on-demand, elastic infrastructure. Since they provide the least abstraction, if you'd like a scalable infrastructure, you have to do all the work yourself. It is generally not so painful to migrate from one IaaS cloud to another. PaaS clouds however, since they provide higher levels of abstraction, are much easier to manage and scale. A PaaS application essentially auto-scales, delivering cloud computing's holy grail. However the big price is that you generally have to rewrite your application for the particular cloud platform. Not only is that painful, but it also may lock you in to the cloud vendor, making it extremely hard to change vendors afterwards. Which is why I think the open-source world needs great open-source PaaS cloud frameworks (have a favorite? drop me a line in the comments section). If a SaaS application meets your needs at a good price point, then the only potential disadvantages would be data lock-in, as well as the (in)ability to mash up the SaaS application with other tools. A good piece of advice here is to choose SaaS applications that provide full API access to your data, such that you can easily pull off all data and metadata should you need to.

A different categorization of clouds is private vs public. Private simply means that the cloud infrastructure is built in-house, behind the firewall. For example, you could turn your corporate datacenter into a private cloud. The benefits: you gain better efficiency and datacenter utilization across different departments, as well as being able to provide an elastic and fast response to your enterprise's departmental IT needs. Should you want to start playing with a private cloud solution, Ubuntu Enterprise Cloud is a good start. A public cloud is run by a third-party service provider, and is either IaaS, PaaS or SaaS, or even a mix of some. Why would you want to migrate some workloads to a public cloud? Simply because public cloud vendors, thanks to their economies of scale, are able to provide equivalent if not better service at a significantly lower price point, coupled with the ability to grow instantly. A hybrid cloud, on the other hand, is a private cloud that can "burst" to a public cloud when its resources are exhausted. The goal is to bring the best of both worlds: the control and data security of private clouds with the elasticity and economies of large-scale public clouds. More and more workloads are being migrated to the cloud, and it's all just starting.

Has your organization migrated some workloads to the cloud already? Are you planning on that? Are you planning on building your own private cloud? Please let me know in the comments, along with your motivations and the challenges you faced. If you have any questions in general, let me know as well.

Friday, November 12, 2010

Show Off Ubuntu Desktop on Cloud

Want to show off your Ubuntu desktop in the cloud? Perhaps you want to demo it to some Windows or OSX friends, or new users at your LoCo event want to play with Ubuntu for a bit. Well, look no further. In this article I will create an Ubuntu Maverick 10.10 desktop in the Amazon EC2 cloud and connect to it using the x2go terminal server, which leverages the excellent NX remote display libraries.

Start by launching the following AMI (ami-1a837773). I chose the official Ubuntu 32-bit AMI, so that we can run it on an m1.small instance. If you're not sure how to launch this instance, you might want to review my point-n-click guide. After launching the instance and logging in, I do my customary:

ssh ubuntu@xxxxx   #replace with your instance's public dns name
sudo -i
screen
apt-get update && apt-get dist-upgrade -y

Let's install the x2go terminal server:
# gpg --keyserver wwwkeys.eu.pgp.net --recv-keys C509840B96F89133
# gpg -a --export C509840B96F89133 | apt-key add -
# echo "deb http://x2go.obviously-nice.de/deb/ lenny main" >> /etc/apt/sources.list
# apt-get update
# apt-get install x2goserver-home

Optional step: Switch system to libjpeg-turbo

I like to break my Ubuntu system by installing unsupported software, so I will be switching the system's default libjpeg to a newer variant that utilizes your CPU's SIMD instruction set to provide better performance. Since connecting to a desktop remotely heavily utilizes JPEG compression, I suspected this step would give me a performance boost. It is however not recommended, especially for someone who wouldn't be comfortable fixing their system using the console only. You need to do the following on the EC2 server and on your own system. I am assuming 32-bit systems; you can find 32/64-bit versions here.
# wget 'http://sourceforge.net/projects/libjpeg-turbo/files/1.0.1/libjpeg-turbo_1.0.1_i386.deb/download' -O libjpeg-turbo_1.0.1_i386.deb
# dpkg -i libjpeg-turbo_1.0.1_i386.deb
Selecting previously deselected package libjpeg-turbo.
(Reading database ... 25967 files and directories currently installed.)
Unpacking libjpeg-turbo (from libjpeg-turbo_1.0.1_i386.deb) ...
Setting up libjpeg-turbo (1.0.1-20100909) ...

# ls -l /usr/lib/libjpeg.so.62
lrwxrwxrwx 1 root root 17 2010-11-12 12:35 /usr/lib/libjpeg.so.62 -> libjpeg.so.62.0.0
# rm -rf /usr/lib/libjpeg.so.62
# ln -s /opt/libjpeg-turbo/lib/libjpeg.so.62.0.0 /usr/lib/libjpeg.so.62
End-Of-Optional-Step

Install the Ubuntu desktop itself (the GUI):
apt-get install ubuntu-desktop
This takes a good 10-15 minutes, after which your system is ready. Grab yourself a favourite x2go client here, and send your friends links to the Windows and OSX clients to let them see the light :) In my case I just used my Ubuntu system to connect remotely, with "x2goclient", a Qt4 client from the same repo we added before.
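Installing it locally is the same dance as on the server (same key, same sources line, just not as root this time):

gpg --keyserver wwwkeys.eu.pgp.net --recv-keys C509840B96F89133
gpg -a --export C509840B96F89133 | sudo apt-key add -
echo "deb http://x2go.obviously-nice.de/deb/ lenny main" | sudo tee -a /etc/apt/sources.list
sudo apt-get update
sudo apt-get install x2goclient

Here are the settings I used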

[Screenshot: x2go client settings]

I am using my SSH key to log in to the Ubuntu virtual desktop. If you're on Windows/OSX and would rather not use the SSH key, reset the Ubuntu user's password and connect using the password.
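If you go the password route, a quick sketch (run on the cloud instance; note the hedge in the comments):

# as root on the instance: give the ubuntu user a password
passwd ubuntu
# some EC2 images ship with password SSH logins disabled; if so, set
# "PasswordAuthentication yes" in /etc/ssh/sshd_config and restart ssh

Once connected, we see our familiar and beautiful Ubuntu desktop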

[Screenshot: Ubuntu desktop over x2go on EC2]

I was pleasantly surprised to hear the drum-beating sound of Ubuntu booting! Wow! That was just awesome. x2go uses PulseAudio to bring audio from the remote machine right to your desktop. I could also easily forward my local files to the instance in the cloud. Anyone already using an Ubuntu desktop in a cloud? Let me know about it! What kind of use cases would you use such a setup for? If you have some fancy setup, let me know about that as well.

Wednesday, November 10, 2010

OpenStack dev env on EC2

Just as I previously blogged about running your own UEC on top of EC2 (cloud on cloud), here is another cloud-on-cloud post showing you how to run an OpenStack compute development environment on top of EC2. All of the heavy lifting is really done by the awesome novascript! I started by launching Ubuntu server 10.10 64-bit (ami-688c7801) on an m1.large instance. If you're not sure how to get this done, please check my visual point-n-click guide to launching Ubuntu VMs on EC2.

Once ssh'ed into my Ubuntu server instance, I fire off an update:
sudo -i
apt-get update && apt-get dist-upgrade

Let's see the available ephemeral storage
root@ip-10-212-187-80:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1             9.9G  579M  8.8G   7% /
none                  3.7G  120K  3.7G   1% /dev
none                  3.7G     0  3.7G   0% /dev/shm
none                  3.7G   48K  3.7G   1% /var/run
none                  3.7G     0  3.7G   0% /var/lock
/dev/sdb              414G  199M  393G   1% /mnt

As you can see, /mnt is auto-mounted for us. We don't really need this. For nova (the OpenStack compute component) to start, it needs an LVM volume group called "nova-volumes", so we unmount /mnt and use sdb for our LVM purposes.

# umount /dev/sdb
# apt-get install lvm2

root@ip-10-212-187-80:~# pvcreate /dev/sdb
  Physical volume "/dev/sdb" successfully created
root@ip-10-212-187-80:~# vgcreate nova-volumes /dev/sdb
  Volume group "nova-volumes" successfully created
root@ip-10-212-187-80:~# ls -ld /dev/nova*
ls: cannot access /dev/nova*: No such file or directory
root@ip-10-212-187-80:~# lvcreate -n foo -L1M nova-volumes
  Rounding up size to full physical extent 4.00 MiB
  Logical volume "foo" created
root@ip-10-212-187-80:~# ls -ld /dev/nova*
drwxr-xr-x 2 root root 60 2010-11-10 10:27 /dev/nova-volumes

I had to create an arbitrary volume named "foo" just to get /dev/nova-volumes created. If there's a better way, let me know, folks. Let's go check out the novascript. You need to do that somewhere with more open permissions than /root :) so /opt is perhaps a good choice.

# cd /opt
# apt-get install git -y
# git clone https://github.com/vishvananda/novascript.git
Initialized empty Git repository in /opt/novascript/.git/
remote: Counting objects: 121, done.
remote: Compressing objects: 100% (114/114), done.
remote: Total 121 (delta 42), reused 0 (delta 0)
Receiving objects: 100% (121/121), 16.62 KiB, done.
Resolving deltas: 100% (42/42), done.

From here, we simply follow the novascript instructions to download and install all components
# cd novascript/
# ./nova.sh branch
# ./nova.sh install
# ./nova.sh run

Watch huge amounts of text scroll by as all components are installed. The final "run" line starts a GNU screen session with all nova components running in separate screen windows. That is just awesome! For some reason though, my first run was unsuccessful. I had to detach from screen and ctrl-c kill it. I then tried starting the nova-api component manually, which worked fine! I then tried to run the script again, and strangely enough, this time it worked flawlessly. Probably just an initialization thing. Thought I'd mention this in case any of you face the same issue. Here's what I did, which you may or may not have to do:
# ./nova/bin/nova-api --flagfile=/etc/nova/nova-manage.conf
# ./nova.sh run   # works this time .. duh

Almost there! Nova's components are now running inside screen, and you're dropped into screen window number 7. From there we proceed to create some keys, launch a first instance and watch it spring to life.

# cd /tmp/
# euca-add-keypair test > test.pem
# euca-run-instances -k test -t m1.tiny ami-tiny
RESERVATION     r-yehvnkwa      admin
INSTANCE        i-3fxfo2        ami-tiny        10.0.0.3        10.0.0.3        scheduling      test (admin, None)      0               m1.tiny 2010-11-10 10:50:27.337898                      
# euca-describe-instances
RESERVATION     r-yehvnkwa      admin
INSTANCE        i-3fxfo2        ami-tiny        10.0.0.3        10.0.0.3        launching       test (admin, ip-10-212-187-80)  0               m1.tiny 2010-11-10 10:50:27.337898                      
# euca-describe-instances
RESERVATION     r-yehvnkwa      admin
INSTANCE        i-3fxfo2        ami-tiny        10.0.0.3        10.0.0.3        running test (admin, ip-10-212-187-80)  0               m1.tiny 2010-11-10 10:50:27.337898

Let's ssh right in
# chmod 600 test.pem
# ssh -i test.pem root@10.0.0.3
The authenticity of host '10.0.0.3 (10.0.0.3)' can't be established.
RSA key fingerprint is ab:96:c3:ee:22:84:28:2f:77:ad:d9:a9:52:63:7c:f9.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.0.0.3' (RSA) to the list of known hosts.
--
-- This lightweight software stack was created with FastScale Stack Manager
-- For information on the FastScale Stack Manager product,
-- please visit www.fastscale.com
--
-bash-3.2# #Yoohoo nova on ec2

Once you detach from screen, all nova services are killed one by one to clean things up. With that setup, you can immediately hack on the code, then re-launch the nova components to see the effect. You can use bzr to update the codebase and so on. In case you're wondering whether this works on KVM on your local machine, it does, beautifully! Of course, instead of the LVM setup on the ephemeral storage step, you'd have to pass a second KVM disk to the VM. Other than that, it's about the same.
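If you try that local KVM route, here's a rough sketch of handing the VM a second disk (file names and sizes are arbitrary; ubuntu-server.img stands in for your existing root disk image):

# on the host: create a disk image and hand it to the VM as a second drive
qemu-img create -f raw nova-volumes.img 20G
kvm -m 2048 -drive file=ubuntu-server.img -drive file=nova-volumes.img
# inside the guest, the new disk shows up as e.g. /dev/vdb (or /dev/sdb)
pvcreate /dev/vdb
vgcreate nova-volumes /dev/vdb

How awesome is that! Let me know if you have any questions or comments, and feel free to jump on IRC in #ubuntu-cloud and grab me (kim0). Have fun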

Friday, November 5, 2010

Cloud Computing 101

I get asked every now and then: what is this cloud thing, why is it cool, why is everyone talking about it, why should I care! As you see, that's a lot of whys! In this post I attempt to put an end to those whys with some answers. Most of the blogosphere around cloud computing gets caught up in fine details and the latest bits and pieces of technology, while ignoring newcomers who are not quite sure why everyone is so hyped about cloud to begin with. I hope to help newcomers gain a better view of what the fuss is all about.

Let's assume for a moment that you're called Jim and you're the IT manager at a fictional organization. Your boss walks in and tells you the development team is ready to deploy their ultra-scalable video sharing web application. It's very hard to determine how well the market will accept the new web app; we could be the next YouTube, or we could have a much harder start. We estimate we'll need anywhere between 10 and 100 servers for the first couple of weeks, and anywhere between 1 and 50 terabytes of storage, depending on market demand. The boss opens the door ready to leave, then turns around and says: can you please have that ready by the end of this week! Talk about poor management in this hypothetical company; in reality things are not that bad, though in many cases not much better either. So, if you were Jim, you would now probably be thinking of ways to end your life, or at least you'd be writing a farewell email! With the advent of cloud computing, however, you have other options. You can snap your fingers and have a hundred servers created, snap them again and have 50TB of storage appear right next to them, ready to serve you. If you think that's more "magical" than the iPad launch, you would be right, although I'm sure Steve Jobs would disagree. That magic is what hypes many IT people about clouds! Well technically, instead of snapping your fingers, you'd perform an API call to a cloud provider. That means you either run a command or click a button in some management tool, and those resources spring to life! Can you already feel how enabling this cloud thing is!
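Concretely, that finger snap is just a couple of commands against the provider's API. A hedged sketch with euca2ools/EC2 (the AMI id is a placeholder, and since EBS volumes capped out at 1TB each at the time of writing, Jim would loop over the second command for his 50TB):

# a hundred servers, one command
euca-run-instances ami-xxxxxxxx -n 100 -k jim-key -t m1.small
# ...and the first terabyte of storage
euca-create-volume -s 1000 -z us-east-1a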

Cloud is called "cloud" because you don't really know what's inside it or how it is built. A cloud icon was, and is, the standard representation of the "Internet", or of a remote network that you don't really care about or don't control. It is in essence a black box to you, with traffic going in and coming out the other end. You don't know or care how it is built, and you're not involved in its daily operation. It provides you "services" that you use. In Jim's case, those services were large numbers of servers, storage and of course networking (you still need a way to access those remote resources!). Jim requested his 50TB of storage and got it; he does not really know what the physical backing store for this storage service is. Are those terabytes of storage (which are holding his company's most precious data) living on a fiber-connected high-end SAN, a low-end SAN, or a NAS filer? How are the servers accessing the storage network: high-performance InfiniBand, fiber connections, iSCSI, AoE? Lots of options, but Jim doesn't really know. Whether or not he cares is a different story. I would say he should not care about the technology used to build the solution; however, he should care about the SLA his money is buying him. i.e. When you buy storage you are not only buying capacity, you're also buying redundancy and performance. Which is probably why many IT people care about the brand of the SAN storage, and the server-to-storage connectivity, to begin with. What they really care about is "Is my data safe on that storage?" and "Will that storage deliver the performance I need?". They really care about the SLA the cloud services provider is able to achieve.

A common misconception about "cloud" is that cloud equals virtualization. This is not really true. You could very well build a cloud solution that does not use virtualization; instead, a physical server would be powered on, PXE booted, deployed and made ready to serve you! It would probably end up being too expensive and inflexible, with limited billing options, but it would not be impossible to build. That's why most commercial cloud vendors end up using some kind of virtualization technology as a building block in their "cloud compute service", i.e. the CPU and memory cloud layer. Virtualization is a neat trick to split up a physical server into multiple virtual machines, each running its own operating system and each having its own completely separate software stack. It enables the cloud service provider to carve up different sizes of virtual servers from the underlying physical servers. As a cloud consumer, you end up paying for only the size your workload needs.

The reason cloud computing usually ends up being compared to the electricity grid is that both provide you with on-demand services meeting a certain SLA, and you end up paying for only the amount you used. In the case of electricity, you don't care what equipment the electricity company is using to generate your current; you only care that it meets a certain SLA (say 220V, able to pump current of up to 100A, and being online 99.999% of the time). You could run your own generators, but it would be inefficient (expensive) to do so, it would be a hassle to keep everything running, it would require skilled workers keeping everything online, and it would not scale if you suddenly needed more current! That is why most people do not run their own electricity generators and instead depend on the grid. However, with all the disadvantages mentioned, some businesses still choose to own and operate their own diesel engines for generating electricity, at least as a backup solution. Why? Because those businesses are seeking more "security". They want to be in control; they don't want the electricity company to control such an important resource for their business. Everything mentioned so far about the electricity company applies to cloud vendors as well. Cloud vendors are the IT equivalent of the electricity grid. Running your own datacenter is the analogue of owning a diesel engine. Of course, almost every business nowadays owns, builds and operates its own datacenter (diesel engine). However, that might be changing rapidly; we're already seeing signs of workloads shifting into the cloud, which is what all the fuss is about. Cloud is the electricity grid of the IT world, and perhaps in the not too distant future it will be powering the vast majority of our personal and professional IT needs. Cloud is all about the commoditization of IT resources and services, coupled with a new business model for consumption, lowering the entry barrier for smaller businesses and helping them focus on their core competency instead of on IT.

I hope to have helped shed some light on the topic. I'll probably be writing a part-2 soon, touching on the types and key properties of a cloud as well as adoption barriers and compromises. I understand many "cloud people" disagree about what qualifies as a cloud, its definition, and, well, basically everything about cloud is debatable, so do let me know (politely :) if you disagree with any of the points mentioned. Let me know in the comments what you think the key properties of a cloud are.

Update: Continue reading part 2

Tuesday, November 2, 2010

Egypt LoCo Maverick release party

The fun is everywhere :)

Toulan طولان

[Photos from the release party]

A link to the whole set:
http://www.flickr.com/photos/maggieosama/sets/72157625277893568/show/

Ubuntu is free, fun and global!