Guides

Upgrading vCenter Server Appliance 5.5 to 6.0 using CLI
In 6.0 the standard installation and upgrade of the vCenter Server Appliance has changed to an ISO which you can mount in Windows. This ISO provides a web interface. This interface asks you to install the Client Integration Plugin 6.0, after which you can use the web interface to install or upgrade your vCenter Server Appliance.
Of course, this gives us Unix users another hurdle to overcome when installing the vSphere environment. Also, the Client Integration Plugin has some issues working with the latest versions of Chrome and Firefox. Lastly, hardly anybody likes using a web interface for this kind of installation.
Luckily, VMware has been kind enough to provide us with a CLI installer as well! I've seen a couple of blog posts about using the CLI installer to install a new VCSA, but not as many about upgrading an existing VCSA. So I decided to do a little write-up providing some examples.
Overview of the upgrade
The tool uses a JSON template file containing all the information needed to perform the upgrade. It first deploys a new VCSA VM on a target host; this new VM is provisioned with a temporary network configuration. It then migrates all the data from the existing VCSA to the new one. Once this is done, it shuts down the existing VCSA and reconfigures the network on the new VCSA to take over all the settings from the old one.
JSON template
Below is an example of a JSON template file that can be used to upgrade a 5.5 VCSA to a 6.0 VCSA. There are more templates inside the ISO (folder vcsa-cli-installer/templates) which you can use, but I've noticed some issues with these templates missing important sections.
{
    "__version": "1.0",
    "__comments": "Sample template to upgrade a vCenter Server with an embedded Platform Services Controller from 5.5 to 6.0.",
    "source.vc": {
        "esx": {
            "hostname": "<IP of ESXi with current vCenter on>",
            "username": "root",
            "password": "vmware"
        },
        "vc.vcsa": {
            "hostname": "<IP of current vCenter>",
            "username": "administrator@vsphere.local",
            "password": "vmware",
            "root.password": "vmware"
        }
    },
    "target.vcsa": {
        "appliance": {
            "deployment.network": "<Name of your Management network Port group on your target ESXi>",
            "deployment.option": "<tiny|small|medium|large>",
            "name": "<VM name, this has to be different from the current vCenter VM>",
            "thin.disk.mode": true
        },
        "os": {
            "ssh.enable": true
        },
        "sso": {
            "site-name": "First-Default-Site"
        },
        "temporary.network": {
            "hostname": "<Temporary hostname, does not have to be DNS resolvable>",
            "dns.servers": [
                "<First DNS server IP>",
                "<Second DNS server IP>"
            ],
            "gateway": "<Gateway IP>",
            "ip": "<Temporary IP for migration>",
            "ip.family": "ipv4",
            "mode": "static",
            "prefix": "<network prefix, for instance: 24>"
        },
        "esx": {
            "hostname": "<IP of ESXi to which the new VCSA should be placed on>",
            "username": "root",
            "password": "vmware",
            "datastore": "<The datastore name inside the target ESXi where to store the VCSA VM>"
        }
    }
}
Of course I kept some values at their defaults, but I'm sure you can figure out what to change where. There are a couple of important points I would like to mention:
- username and password in your source.vc > vc.vcsa section have to be the SSO administrator user and password (default user = administrator@vsphere.local, default pass = vmware)
- target.vcsa > appliance > name is the name the VM will get. It has to be unique in your environment, so it cannot be the same as your current VCSA; it has no impact on the hostname.
- target.vcsa > sso > site-name is just for your SSO. It has to be filled in, but keep it simple ('First-Default-Site' should be fine).
- target.vcsa > temporary.network: this is only used during the upgrade/migration. After the migration, all the network settings are taken over from the old VCSA.
- target.vcsa > esx: this is the info of the ESXi host on which you want to place the new VCSA VM; it can be the same as the source or a different one. Just make sure the info is correct (if confused by the POD43 file: my local datastores are named after the ESXi IP, to easily differentiate them).
Running the CLI installer
I will run this installer directly from the ISO mounted on /mnt/vcsa on a Linux machine.
I would suggest doing a dry run first; you can do so with the following command:
/mnt/vcsa/vcsa-cli-installer/lin64/vcsa-deploy upgrade --verify-only --accept-eula --no-esx-ssl-verify vcsa-upgrade-template.json
This command will verify the configuration and all the connectivity. It will return a list of warnings and errors. Some of the more common warnings and errors you might encounter:
- Warnings about the PostgreSQL password being the same as the root password of the new VCSA; these can be safely ignored.
- Warnings about port 22; these can also be safely ignored, just make sure the old and new VCSAs can communicate over SSH.
- Errors about SSO and certificates: these will prevent any upgrade, so they are something you will have to look at. Most of the time they indicate that your certificates were generated with a different hostname or IP than the one currently in use. You can rectify this by going to the 5.5 VCSA's administration web interface, checking that the hostname, IP and DNS settings are all correct, and regenerating the certificates if needed (this requires a reboot). A quick check of what the current certificate actually contains is sketched below.
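From any machine with OpenSSL you can read the subject and expiry of the certificate the 5.5 VCSA serves on port 443 and compare it with the hostname/IP you actually use (the IP below is a placeholder):

echo | openssl s_client -connect 192.168.1.10:443 2>/dev/null | openssl x509 -noout -subject -enddate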
After you have fixed any errors, you can run the command without the --verify-only option:
/mnt/vcsa/vcsa-cli-installer/lin64/vcsa-deploy upgrade --accept-eula --no-esx-ssl-verify vcsa-upgrade-template.json
This will start the upgrade and migration. Just follow along with what is happening; you get some good info on the progress. It can take a while to finish (easily half an hour to an hour, and if you have a slow connection between the machine you are running the command on and the appliances and ESXi hosts, the data transfers might take even longer).

vCenter 5.5 Server Appliance quirks
Last week I upgraded my whole vSphere 5.1 environment to 5.5 and migrated to the vCenter 5.5 Server Appliance (VSA). Overall, I'm happy with this migration, as the appliance gives me everything I need and the new web client works amazingly well, both with Mac OS X and Windows.
But there are a few quirks and small issues with it. Nothing too serious, and as I understand it, the VMware engineers are looking into them, but for those who are experiencing these issues, I wanted to provide a bit of explanation on how to fix them.
Quick stats on hostname is not up to date
The first issue I noticed was a message that kept appearing in the web client when I was looking at the summary of my hosts. At first I thought there was a DNS or connection issue, but I was still able to manage my hosts, so that was all good.
When I started investigating the issue on the internet, I noticed a few people reporting it, and apparently VMware has already posted a KB article (KB 2061008) on it.
Let's go through the simple steps to fix this on the VSA:
- Make sure SSH is enabled in your VSA admin panel
- SSH to the VSA with user root and use the root password from the admin panel
- Copy the /etc/vmware-vpx/vpxd.cfg file to a safe location; you will keep this as a backup
- Open the /etc/vmware-vpx/vpxd.cfg file with an editor
- Locate the </vpxd> tag
- Add the following text above that tag:
<quickStats>
    <HostStatsCheck>false</HostStatsCheck>
    <ConfigIssues>false</ConfigIssues>
</quickStats>
- Save the file
- Restart your VSA; the easiest way is just to reboot it using the admin panel or the reboot command (a small command-line sketch of these steps follows below).
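For reference, a minimal sketch of the same edit done entirely over SSH (the backup location is just an example):

# keep a backup copy of the original configuration
cp /etc/vmware-vpx/vpxd.cfg /root/vpxd.cfg.bak
# add the <quickStats> block above the </vpxd> tag
vi /etc/vmware-vpx/vpxd.cfg
# verify the block is in place, then reboot the appliance
grep -A 3 "<quickStats>" /etc/vmware-vpx/vpxd.cfg
reboot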
If you ever update the VSA, check the release notes; if this bug is fixed, you might want to remove these config values again.
Unable to connect to HTML5 VM Console
After a reboot of my VSA, I was unable to open the HTML5 VM Console from the web client. I got "Could not connect to x.x.x.x:7331"; the service seemed to be down. VMware is aware of this issue and a KB article (KB 2060604) is available.
The cause of this issue is a missing environment variable (VMWARE_JAVA_HOME). To make the VSA aware of this variable, you can follow these steps:
- Make sure SSH is enabled in your VSA admin panel (see screenshot in step 1 of the issue above)
- SSH to the VSA with user root and the root password from the admin panel
- Open the /usr/lib/vmware-vsphere-client/server/wrapper/conf/wrapper.conf file with an editor
- Locate the Environment Variables part
- Add the following text to the list of environment variables:
set.default.VMWARE_JAVA_HOME=/usr/java/jre-vmware
- Save the file
- Restart the vSphere Web client using:
/etc/init.d/vsphere-client restart
That should fix the issue and the HTML5 VM Console should work fine!
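If you want to double-check after the restart, a quick (optional) look on the VSA will tell you whether something is listening on port 7331 again:

netstat -tlnp | grep 7331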

Migrate vCenter 5.1 Standard to vCenter 5.5 Server Appliance with Distributed vSwitch and Update Manager
At VMworld San Francisco, VMware announced vSphere 5.5, and they officially released it a couple of days ago. With this new version of vSphere, the vCenter Server Appliance has been updated as well.
With this new version, the maximums have been increased. The vCenter Server Appliance was only usable in small environments, with a maximum of 5 hosts and 50 VMs on the internal database. If you had more hosts and/or VMs, you had to connect your vCenter to an Oracle database. (Thanks Bert for noting this)
As of version 5.5, these limits have been raised to 100 hosts and 3000 VMs. With this change, the vCenter Server Appliance becomes a viable alternative to a full-fledged install on a Windows Server.
Until now I have always used vCenter as a full-fledged install on Windows Server, with an SQL Server, in my home lab. I used this setup to get a feel for running, maintaining and upgrading vCenter and all its components, while using multiple Windows servers in a test domain. But with this new release, I've decided to migrate to the appliance and do a semi-fresh install.
I say semi-fresh, as I will migrate a few settings to this new vCenter server. Most settings will be handled manually or through the hosts, but the Distributed vSwitch is a bit more complicated. So I wanted to write down the steps I used to migrate from my standard setup to the appliance.
1. Export DvSwitch
You can export your DvSwitch using the web client with a few easy steps.
Go to the Distributed vSwitch you want to migrate and right click it, go to All vCenter actions and select Export Configuration. Make sure you export all port groups and save the file to a convenient location.
2. Create a cluster in the new vCenter Server Appliance
Make sure the cluster has the same settings as the one in the old vCenter server. Focus on the EVC settings; the rest can be as you choose, but EVC is rather important if you are migrating live hosts and VMs.
3. Disable High Availability on the cluster
As you need to move hosts away from the cluster, you will have to disable High Availability on it.
4. Disconnect the hosts from the old vCenter server and connect them to the new vCenter Server Appliance
At this point, you need to disconnect the hosts from the old vCenter server and connect them to the new vCenter Server Appliance. This might take a while, so be patient and watch the progress.
Your hosts might show a warning indicating an issue, but this can be safely ignored, as it will be resolved after the import of the Distributed vSwitch.
5. Import the Distributed vSwitch into the new vCenter Appliance Server
Go to the network tab, right-click the cluster, go to All vCenter actions and select Import Distributed Switch.
Make sure you select the ‘Preserve original distributed switch and port group identifiers’.
Give it a bit of time and your hosts will recognise the switch, and everything will be synced and connected again.
6. Update manager
There is one small issue with the otherwise great vCenter Server Appliance: it lacks an Update Manager in its regular setup. Luckily, you can connect a standard Update Manager installation to the vCenter Server Appliance. I would suggest you just follow the standard guide. That one is still for vSphere 5.1, but the 5.5 version hasn't changed much, so it should be pretty straightforward.
*update* Added extra information on the limitation of vCenter Server Appliance 5.1 (Oracle DB possibility)

How to setup an IPv6-only network with NAT64, DNS64 and Shorewall
Goal
The goal of this article is to help people set up a network that is IPv6-only (except for the gateway) while still allowing users to access IPv4 servers beyond the gateway.
Overview
If you follow the news surrounding IPv4 exhaustion, you will know that IPv4 is rapidly running out of space (currently RIPE is allocating from its last /8 block). So it's time to start thinking about moving to IPv6.
I have been using a 6-in-4 tunnel from Sixxs for a couple of years now; using it, I have set up a dual-stack network with my own /48 subnet. This setup is fun and made it possible for me to test IPv6 in real life.
I've been using this setup for a while now, and though it's an improvement in getting ready for IPv6, it still includes an IPv4 network as well. The ultimate goal should be to use only IPv6 in my internal network. The downside of such a network is that I would be unable to reach 'old' IPv4 servers which don't have an IPv6 address.
To solve this, I decided to configure an IPv6-only network in a test environment, using NAT64 and DNS64. DNS64 basically synthesizes IPv6 addresses, using a prefix, for hostnames that only have an IPv4 address. NAT64 accepts connections to those special IPv6 addresses and translates them into IPv4 connections. It does the same thing as normal NAT, translating IP addresses, just across different IP versions.
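As a concrete illustration (the hostname and addresses are made up, and the prefix is the one chosen later in this article): if an IPv4-only host resolves to 192.0.2.10, a DNS64 resolver hands out an AAAA record with that IPv4 address embedded in the last 32 bits of the prefix:

dig +short A ipv4only.example.com @2001:1d3:7f2:10::1
# returns: 192.0.2.10
dig +short AAAA ipv4only.example.com @2001:1d3:7f2:10::1
# returns: 2001:1d3:7f2:ffff::c000:20a  (192.0.2.10 embedded as hex c000:020a)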
I'm using this guide as a form of documentation. I might go a bit fast through a couple of sections, but that's mainly because I assume you are able to configure a basic Bind9 server or Shorewall setup.
It's also important to note that I will not provide exact commands to install each package, but all of the packages should be available in most package managers (aptitude/apt, yum, …).
Initial setup
Let's start with an overview of my setup and the information you need to set this up.
My gateway is an Ubuntu 12.04 LTS server which is connected to 2 networks and has a Sixxs connection using AICCU, which is configured to start at boot. In total this gives me 3 interfaces (plus loopback, but we'll disregard that).
Interfaces configurations:
- eth0: DHCP (internet)
- eth1: Static IPv6: 2001:1d3:7f2:10::1/64 (you receive a /48 from Sixxs, split it up in at least /64 subnets)
- sixxs: IPv6-in-IPv4 tunnel
List of software used. This contains only the daemons specific to this guide:
- Bind9: DNS server
- Shorewall
- Shorewall6
- radvd
I'll assume that your gateway is able to connect to the internet using DHCP on the eth0 interface, which provides you with an IPv4 address from your ISP, and that your Sixxs tunnel is configured.
Bind9
Configure Bind9 as you like; just make sure you have configured usable forwarders and enabled recursion.
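As a minimal sketch of what that means in named.conf.options (the forwarder addresses are just examples, use the ones that fit your network):

options {
    forwarders {
        8.8.8.8;
        8.8.4.4;
    };
    recursion yes;
    // the dns64 block from the "Configuring DNS64" section below will also go in here
};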
Shorewall
Make sure IP_FORWARDING=On. Here are my basic configuration files:
Zones
#ZONE   TYPE        OPTIONS     IN OPTIONS      OUT OPTIONS
fw      firewall
net     ipv4
Interfaces
#ZONE   INTERFACE   BROADCAST   OPTIONS
net     eth0        detect      dhcp,tcpflags,routefilter,nosmurfs,logmartians
No need to add eth1, as it does not have an IPv4 address.
Policy
###########################################################################
#SOURCE     DEST        POLICY      LOG         LIMIT:BURST
#                                   LEVEL
net         $FW         DROP        info
$FW         net         ACCEPT
# CATCH ALL
all         all         REJECT      info
For security reasons, I drop everything coming in from the dangerous internet…
Shorewall6
As with Shorewall, make sure IP_FORWARDING=On. The purpose of this configuration is to block all IPv6 traffic coming in from the internet, but to allow clients connected to the gateway through the internal network to access the internet through the Sixxs tunnel.
Basic configuration files:
Zones
#ZONE   DISPLAY     COMMENTS
fw      firewall
net     ipv6
loc     ipv6
Interfaces
#ZONE   INTERFACE   BROADCAST   OPTIONS
net     sixxs       detect      tcpflags,nosmurfs,forward=1
loc     eth1        detect      tcpflags,forward=1
Policy
###########################################################################
#SOURCE     DEST        POLICY      LOG         LIMIT:BURST
#                                   LEVEL
$FW         net         ACCEPT
$FW         loc         ACCEPT
loc         net         ACCEPT
loc         $FW         ACCEPT
net         $FW         DROP        info
net         loc         DROP        info
# CATCH ALL
all         all         REJECT      info
Again, I block everything coming in from the dangerous internet.
radvd
radvd is used to provide stateless address autoconfiguration for IPv6 clients. After you have acquired your subnet from Sixxs and configured your tunnel, you can set up radvd to provide your clients with the information they need to access the IPv6 network.
My configuration:
interface eth1
{
    AdvSendAdvert on;
    prefix 2001:1d3:7f2:10::/64
    {
        AdvOnLink on;
        AdvAutonomous on;
        AdvRouterAddr on;
    };
};
Configuring DNS64
Configuring DNS64 is not that hard; you just need to tell Bind to return a special AAAA record when a client from a specific range requests the IP address of a hostname that has no AAAA record. This AAAA record is constructed by Bind using a prefix. When a client then tries to connect to an address starting with that prefix, the traffic is forwarded (through routing) to the NAT64 setup.
Prefix
First off, you should decide on a prefix to use. This prefix must be part of your personal /48 subnet (so it doesn't interfere with other real IP addresses), and you must dedicate at least a /96 subnet to it.
As 2001:1d3:7f2::/48 is the subnet provided to me by Sixxs, I decided to use 2001:1d3:7f2:ffff::/96 as my prefix.
radvd configuration
We need to make the clients aware of the IPv6 DNS server on the network, so change your /etc/radvd.conf and add the RDNSS option:
interface eth1
{
    AdvSendAdvert on;
    prefix 2001:1d3:7f2:10::/64
    {
        AdvOnLink on;
        AdvAutonomous on;
        AdvRouterAddr on;
    };
    RDNSS 2001:1d3:7f2:10::1
    {
    };
};
Bind configuration
Now we need to change named.conf.options to contain the following section (inside options, for instance after your forwarders):
dns64 2001:1d3:7f2:ffff::/96 {
    clients {
        2001:1d3:7f2:10::/64;
    };
};
The clients option makes sure that only clients on the internal network connected to eth1 can use the DNS64 service.
Configuring NAT64
To configure NAT64, you have to install an extra daemon: Tayga. Tayga creates a new interface on your server which is basically an internal tunnel through which connections to your prefix network are routed and translated into IPv4 connections. This means both firewalls (Shorewall and Shorewall6) need to be aware of this interface.
Tayga configuration
You will have to make some changes in the Tayga configuration (/etc/tayga.conf); here are the settings I have changed and use:
tun-device nat64
ipv4-addr 192.168.10.1
prefix 2001:1d3:7f2:ffff::/96
dynamic-pool 192.168.10.0/24
Tayga needs an IPv4 address, as it has to communicate with the IPv4 network; it also needs an IPv6 address, but it determines that itself.
The dynamic-pool option is used to select IP addresses for the IPv6 clients. Each IPv6 client that wants to connect to an IPv4 server gets an IPv4 address linked to it in Tayga (so not on the client, only for internal NAT purposes). If you use a /24 subnet, you can basically have 254 clients connecting to IPv4 servers simultaneously. If you need more, you can use a bigger subnet; just make sure you use the private ranges reserved for internal use (RFC 1918).
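If you installed the distribution package, the tayga init script normally takes care of creating the tunnel device and the routes (have a look at /etc/default/tayga); done by hand, the equivalent would look roughly like this, using the addresses from the configuration above:

tayga --mktun                                     # create the nat64 tun device defined in tayga.conf
ip link set nat64 up
ip route add 192.168.10.0/24 dev nat64            # route the IPv4 pool towards Tayga
ip -6 route add 2001:1d3:7f2:ffff::/96 dev nat64  # route the NAT64 prefix towards Tayga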
Shorewall configuration
I chose to configure Shorewall in a similar fashion as for the rest of the IPv4 traffic: all traffic from the internet is blocked, all traffic to the internet is allowed. You need to make Shorewall aware of the nat64 interface, as it needs to allow IPv4 traffic to go to and from it; otherwise the translation won't work.
These are the changes I made to the Shorewall (IPv4) configurations:
Zones
#ZONE   TYPE        OPTIONS     IN OPTIONS      OUT OPTIONS
fw      firewall
net     ipv4
nat64   ipv4
Interfaces
#ZONE   INTERFACE   BROADCAST   OPTIONS
net     eth0        detect      dhcp,tcpflags,routefilter,nosmurfs,logmartians
nat64   nat64       detect      dhcp,tcpflags,routefilter,nosmurfs,logmartians,routeback
No need to add eth1, as it does not have an IPv4 address.
Policy
###########################################################################
#SOURCE     DEST        POLICY      LOG         LIMIT:BURST
#                                   LEVEL
net         $FW         DROP        info
net         nat64       DROP        info
$FW         net         ACCEPT
$FW         nat64       ACCEPT
nat64       net         ACCEPT
nat64       $FW         ACCEPT
# CATCH ALL
all         all         REJECT      info
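One thing these files don't show: the 192.168.10.0/24 pool that Tayga hands out still has to be masqueraded out of eth0 like any other internal IPv4 range. If you don't already have such a rule, in Shorewall that is a single line in the masq file (assuming eth0 is your external interface):

#INTERFACE      SOURCE
eth0            192.168.10.0/24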
Shorewall6 configuration
You need to make Shorewall6 aware of the nat64 interface, as IPv6 traffic needs to go to and from it.
These are the changes I made to the Shorewall6 (IPv6) configurations:
Zones
#ZONE   DISPLAY     COMMENTS
fw      firewall
net     ipv6
loc     ipv6
nat64   ipv6
Interfaces
#ZONE   INTERFACE   BROADCAST   OPTIONS
net     sixxs       detect      tcpflags,nosmurfs,forward=1
loc     eth1        detect      tcpflags,forward=1
nat64   nat64       detect      tcpflags,forward=1
Policy
###########################################################################
#SOURCE     DEST        POLICY      LOG         LIMIT:BURST
#                                   LEVEL
$FW         net         ACCEPT
$FW         loc         ACCEPT
$FW         nat64       ACCEPT
loc         net         ACCEPT
loc         $FW         ACCEPT
loc         nat64       ACCEPT
nat64       net         ACCEPT
nat64       $FW         ACCEPT
nat64       loc         ACCEPT
net         $FW         DROP        info
net         loc         DROP        info
net         nat64       DROP        info
# CATCH ALL
all         all         REJECT      info
That should do it. If you restart the services (bind9, radvd, tayga, shorewall and shorewall6), your gateway is ready to provide an IPv6-only network with connectivity to the internet, both IPv6 (through the Sixxs tunnel) and IPv4 (through your ISP's connection).
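On Ubuntu 12.04 that boils down to something like this (service names assume the standard packages):

service bind9 restart
service radvd restart
service tayga restart
service shorewall restart
service shorewall6 restart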
Clients
You do need to prepare your clients to work on this network. This setup has been created with Linux clients in mind, but I'll try to give an overview of what needs to be changed if you want to support Windows (7/8) and Mac OS X.
Linux clients
Linux clients need to install the rdnssd daemon. This daemon uses the RDNSS information provided by radvd and updates the /etc/resolv.conf file to include it. This way you won't need to configure resolv.conf yourself.
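On a Debian/Ubuntu client that is a single package (package names may differ on other distributions):

apt-get install rdnssd
cat /etc/resolv.conf    # should now list the nameserver advertised via RDNSS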
If you do not want to use the rdnssd daemon, you will have to change your setup and use DHCPv6, which means installing and configuring a DHCPv6 server on the gateway and a DHCPv6 client on each client (in most cases part of the standard DHCP client of your distribution).
Mac OS X
From Mac OS X Lion on, Mac OS X is able to accept the RDNSS information from radvd. DHCPv6 is also supported. So there is no need for further configuration.
Windows
Windows Vista, 7 and 8 do not support RDNSS; you can add this by installing rdnssd-win32. They do support DHCPv6, however, so it might be easier to just configure that.
Windows XP cannot obtain its IPv6 DNS configuration at all without installing Dibbler, a tool that provides Windows XP with DHCPv6 support.
Final thoughts
I kept my configuration simple on purpose, so if you'd like to add complex rules and policies to Shorewall(6) to protect your network, you can do so as you normally would. The only thing to remember is the flow of traffic from an IPv6-only client to an IPv4-only server:
Client IPv6 -> DNS call target (IPv4-only) hostname -> GW IPv6
GW IPv6 -> DNS result (IPv6 address linked to IPv4 target server) -> Client IPv6
Client IPv6 -> packet -> GW IPv6
GW IPv6 -> routes to NAT64 -> GW NAT64 IPv6
GW NAT64 IPv6 -> NAT64 processing -> GW NAT64 IPv4
GW NAT64 IPv4 -> uses standard NAT -> GW IPv4 (external)
GW IPv4 (external) -> packet -> Target IPv4
Target IPv4 -> reply -> GW IPv4 (external)
GW IPv4 (external) -> reverses NAT -> GW NAT64 IPv4
GW NAT64 IPv4 -> reverse NAT64 processing -> GW NAT64 IPv6
GW NAT64 IPv6 -> routes to GW (internal interface) -> GW IPv6
GW IPv6 -> packet -> Client IPv6
You need to remember that flow: restricting traffic between (in this article's case) eth0 and nat64, or between eth1 and nat64, can break your NAT64 setup.

Upgrading Redmine from 0.9.x to 1.3.x
Today I was asked by a client to upgrade his Redmine setup from version 0.9.3 to the latest stable version. As his server is running Ubuntu 10.04 LTS and he installed Redmine using the Ubuntu repositories, this wasn't the easiest or smoothest task I've ever done.
I suspected that upgrading from 0.9.3 directly to 1.3-stable would be a nightmare, so I was planning to upgrade to 1.0-stable, then 1.1-stable, then 1.2-stable and finally 1.3-stable.
All looked well, and I started taking backups. Of course, the whole Redmine installation was set up from the Ubuntu repositories, so all of the ruby, rails, rake and rubygems packages were from Ubuntu 10.04. That caused a bit of a problem, as I needed newer versions of certain packages. So I ran the following commands (after taking backups, of course) to remove Redmine and all its dependencies, and to reinstall whatever is necessary to run rubygems, rake and libapache2-mod-passenger.
aptitude remove redmine
aptitude install rubygems rake libapache2-mod-passenger
This made sure I had a basic setup. After that I used svn to check out all the versions:
svn co http://redmine.rubyforge.org/svn/branches/1.0-stable redmine-1.0
svn co http://redmine.rubyforge.org/svn/branches/1.1-stable redmine-1.1
svn co http://redmine.rubyforge.org/svn/branches/1.2-stable redmine-1.2
svn co http://redmine.rubyforge.org/svn/branches/1.3-stable redmine-1.3
This gave me the opportunity to just go into the appropriate directory, follow the upgrade guide on Redmine.org, and repeat that for each major release. On occasion rake would throw an exception stating I needed a gem installed, or a newer version of one, but overall that wasn't a real problem. Just make sure you run:
gem install -v=<correct version> <gemname>
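For each release in turn, the upgrade itself (after copying config/database.yml and the files/ directory from the previous install) boiled down to roughly the following rake tasks from the Redmine 1.x upgrade guide; double-check the guide for the exact list for your version:

rake generate_session_store
rake db:migrate RAILS_ENV=production
rake tmp:cache:clear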
This all went well until I got to the point of upgrading to 1.3. Apparently version 1.3 needs a later release of rubygems. Whenever I tried running rake to do the DB migration, I ended up with the following error:
rake aborted!
super: no superclass method `requirement' for #<Rails::GemDependency:0x7f9e87ea7a18>
This is a little problematic, as the Ubuntu 10.04 LTS repository does not offer a newer version. My solution was to get the package from a newer distribution using a backport, available thanks to an Ubuntu PPA from Mackenzie Morgan (thanks!):
add-apt-repository ppa:maco.m/ruby
aptitude update
aptitude install rubygems
This installs a version which can be used by Redmine 1.3-stable. After that, everything looked great: I could log in, I checked the configuration, and it worked!
Or so I thought… Whenever I opened the issues list of a project, I would get a server error (500), and the log reflected this:
ActionView::TemplateError (undefined method `-' for nil:NilClass) on line #28 of app/views/issues/_list.html.erb:
25:
26: <% previous_group = group %>
27: <% end %>
28: <tr id="issue-<%= issue.id %>" class="hascontextmenu <%= cycle('odd', 'even') %> <%= issue.css_classes %> <%= level > 0 ? "idnt idnt-#{level}" : nil %>">
29:   <td class="checkbox hide-when-print"><%= check_box_tag("ids[]", issue.id, false, :id => nil) %></td>
30:   <td class="id"><%= link_to issue.id, :controller => 'issues', :action => 'show', :id => issue %></td>
31: <% query.columns.each do |column| %><%= content_tag 'td', column_content(column, issue), :class => column.css_classes %><% end %>
Googling the error led me to a bug report from version 1.0, which presented a simple solution: I just had to run a simple query on the database:
update issues set parent_id = NULL, root_id = id, lft = 1, rgt = 2;