ODA 2.8 – Creating Virtual Machines with the shared repository

ODA version 2.8 is available, and it has a much-anticipated feature – a shared VM template repository.

For those who aren’t familiar with the ODA (Oracle Database Appliance) – this is an engineered system designed specifically for running Oracle databases and for minimising deployment time. It comes RAC-ready with the interconnect and so on ready to go, although using RAC is optional. Deployment is automated, as is much of the day-to-day maintenance (including patching of the OS, firmware, GI and RDBMS). Using the virtualized deployment option means you can run Oracle Database on the processors you’re licensed for and use the leftover processors to run virtual machines housing your application servers.

The new ODA 2.8 shared repository allows us to store our VM templates on the shared storage, but more importantly our cloned virtual machines are on the shared storage too. This makes the VM functionality a lot more flexible – it gives us much more space for our templates, and it also means that we can choose which node to start our VM on.

It means that in the event of hardware failure, we can restart our VM on the remaining node.

Importing and cloning VMs on the ODA is extremely simple, and I’m going to demonstrate it right from the top.


1. Download template

First we need to decide which VM template to use. For this demonstration I’m going to use Oracle Enterprise Linux 5.9, downloaded from Oracle E-Delivery. So I navigate to https://edelivery.oracle.com/linux and choose “Oracle VM Templates”.


ODA now supports the OVA/OVF format and the Linux 5.9 template is provided as such.


We want the paravirtualized template, so we start the download. This particular template is 402MB, so it should download quickly. You need to stage it on dom0 (the physical hardware running the VM server), then unzip the file so that the OVA file is visible (I put mine in /tmp as it’s only needed until the template has been imported).


2. Create shared template repository

If you don’t already have a shared template repository, you’ll want to create one now. We run the following command on dom1 (aka ODA_BASE – our primary virtual machine).


You will notice that we’re naming our repository “shared1”, creating it in the +DATA diskgroup and giving it a size of 50GB.
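Putting those parameters together, the creation command looks like the following sketch (run from ODA_BASE; verify the exact flag spelling against oakcli’s help on your version):

```
oakcli create repo shared1 -dg DATA -size 50
```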

3. Import VM template

Next we want to import our VM template into this shared repository.

oakcli import vmtemplate oel_5_9 -assembly /tmp/OVM_OL5U9_x86_64_PVM.ova -repo shared1 -node 0

We run this on ODA_BASE. We’re giving the template a name (oel_5_9), the path to the file on dom0 (not ODA_BASE!), the repository name, and then the node that will perform the operation (the repository is still shared – it’s just which node will do the work).

Watch out on this step. The OVA file must be placed on dom0 but the command is run on dom1 (ODA_BASE). If you mistakenly place the OVA file on dom1 you’ll get the following error when you try to import the template –

[root@oda-nodea ~]# oakcli import vmtemplate oel_5_9 -assembly /tmp/OVM_OL5U9_x86_64_PVM.ova -repo shared1 -node 0

OAKERR:7044 Error encountered during importing assembly - OAKERR:7005 Invalid template file File /tmp/OVM_OL5U9_x86_64_PVM.ova passed for extraction does not exist

It takes a few minutes to import the template, so be patient. Once it’s done, you can see your template in place.
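The templates in the repository can be listed as follows (output layout varies between oakcli versions):

```
oakcli show vmtemplate
```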

4. Create VM

With the template in place, you can now create a VM using this template. This is called cloning and is performed as follows.

oakcli clone vm my_oel_server -vmtemplate oel_5_9 -repo shared1 -node 0

Again, we’re specifying the node but this isn’t the node that the VM will run on. This is the node which will undertake the cloning activity. We’ll set the node we want the VM to run on shortly.

We can now see our VM has been created but is offline. You’ll also notice the default CPU and memory allocations that come with the template.
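That check is just the VM listing, which shows each VM’s state along with its CPU and memory allocation:

```
oakcli show vm
```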

5. Configure VM

Next we can configure our new VM. First let’s see all of the current settings.
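Assuming the VM name from the clone step, the per-VM detail comes from naming the VM on the show command:

```
oakcli show vm my_oel_server
```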

Actually, in this case I don’t want to change any of the defaults. It has chosen to run on node 0 which is what I want. If I did want to change this so that it ran on node 1 then I would run the following command.

oakcli configure vm my_oel_server -prefnode 1

6. Start VM

We’re ready to start the VM.

oakcli start vm my_oel_server


7. Connect to the VM console

Connecting to the VM before the networking is configured is unfortunately the one painful step if you’re using SSH access to the ODA. The ODA starts a VNC server which is used for console access, but you’ll need a graphical session to the ODA to get to it easily. Otherwise you need to set up a tunnel to the right port on dom0 (5901 if this is the first VM) or use X11 forwarding.

I’ll blog about those methods separately, but for now I’ll leave it to you to get console access working one way or another. The easiest way is probably to VNC to ODA_BASE (you’ll need to run “vncserver” first to set that up) and then run “oakcli show vmconsole my_oel_server” from there. As a long-term solution, X11 forwarding together with Xming on a Windows desktop seems the most efficient method.

8. Configure the guest OS

Once you’re connected to the VM you’ll probably see a blank screen at first. DO NOT PRESS ENTER TO CLEAR IT!! The VM template is waiting for you to specify the hostname, and if you press enter now you’ll tell it to use “localhost” – and you can’t go back. If that happens, use “oakcli stop vm my_oel_server” followed by “oakcli start vm my_oel_server”, and the script will re-run.

So enter the information it requests – hostname, IP address and so on. It will then start SSH, and you can connect to the server remotely.


And that is it – you’re done and you have a VM running. The repository creation and template staging only need to be done once, so in future firing up a new VM takes only a matter of minutes.

9. Move VM

One of the big benefits of the shared repository is no longer being tied to one particular node. So if we want we can now shut down our VM and start it on another node.

Shutting down the VM

Moving the VM
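Expressed as commands (using the VM name from earlier), the move is a stop, re-point, start sequence:

```
oakcli stop vm my_oel_server
oakcli configure vm my_oel_server -prefnode 1
oakcli start vm my_oel_server
```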

10. Automatic Failover

We also have the option to automatically fail the VM over to the other node in the event that the preferred node isn’t available.

Automatic failover
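This is again set through oakcli configure vm; treat the flag below as my recollection of the syntax and verify it against oakcli’s help on your ODA:

```
oakcli configure vm my_oel_server -failover true
```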
