
Posts published in “VMWare”

Make Permanent /dev/vmnet* for VMware Promiscuous mode

VMware Workstation has an annoying issue: it won’t allow promiscuous mode on ethX unless you are running VMware as root (not a great idea). While you can simply change the permissions on /dev/vmnet* to rw-rw-rw-, those changes go away each time you reboot.

A permanent, and somewhat more secure, solution is to edit the VMware startup script so it sets the permissions for you each time the system starts.

Simply add your user(s) to the group of your choice; in my case I’m using the adm group.

usermod -a -G adm username

Edit /etc/init.d/vmware, find the section below, and add the chgrp and chmod lines as shown.

[code]
# Start the virtual ethernet kernel service
vmwareStartVmnet() {
   vmwareLoadModule $vnet
   "$BINDIR"/vmware-networks --start >> $VNETLIB_LOG 2>&1
   # Added the following two lines to change perms on /dev/vmnet*
   chgrp adm /dev/vmnet*
   chmod g+rw /dev/vmnet*
}
[/code]

Now the device files will be set up for you each time the system comes up, ending the annoyance of having to change the perms after every reboot.
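On distros where the vmnet device nodes are managed by udev, a tidier alternative to patching the init script is a udev rule; this is just a sketch, and the rule filename is my own choice:

[code]
# /etc/udev/rules.d/99-vmnet.rules
# Set group and mode on vmnet devices as they are created
KERNEL=="vmnet[0-9]*", GROUP="adm", MODE="0660"
[/code]

Reload the rules with udevadm control --reload-rules (or just reboot) and the devices should come up with the right group every time, with no edits to VMware’s own scripts to redo after an upgrade.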

Restarting the Management agents on ESXi

To restart the management agents on ESXi:
From the Direct Console User Interface (DCUI):

Connect to the console of your ESXi host.
Press F2 to customize the system.
Log in as root.
Use the Up/Down arrows to navigate to Restart Management Agents.

Note: In ESXi 4.1, 5.0, 5.1, and 5.5, this option is available under Troubleshooting Options.

Press Enter.
Press F11 to restart the services.
When the service has been restarted, press Enter.
Press Esc to log out of the system.

From the Local Console or SSH:
Log in to SSH or Local console as root.
Run these commands:

/etc/init.d/hostd restart
/etc/init.d/vpxa restart

Note: In ESXi 4.x, run this command to restart the vpxa agent:

service vmware-vpxa restart

Alternatively:
To reset the management network on a specific VMkernel interface, by default vmk0, run the command:

esxcli network ip interface set -e false -i vmk0; esxcli network ip interface set -e true -i vmk0

Note: Using a semicolon (;) between the two commands ensures the VMkernel interface is disabled and then re-enabled in succession. If the management interface is not running on vmk0, change the above command according to the VMkernel interface used.
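If you’re not sure which VMkernel interface carries the management network, you can list them first. These are standard esxcli commands, but the interface names and addresses shown will of course vary per host:

[code]
# List VMkernel interfaces and their enabled state
esxcli network ip interface list
# Show the IPv4 config so you can spot the management address
esxcli network ip interface ipv4 get
[/code]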

To restart all management agents on the host, run the command:

services.sh restart

Note: For more information about restarting the management service on an ESXi host, see Service mgmt-vmware restart may not restart hostd in ESX/ESXi (1005566).

Convert ESXi disk from thick to thin

When copying, cloning, and moving VMs around, any disks that were created with thin provisioning will ultimately be converted to thick provisioning. What a tremendous waste of disk space if you frequently over-provision disks and let them grow over time as needed. (Oh yeah, that’s what thin provisioning was created for.)

Let’s reduce the disk consumption and convert the VMDKs back to thin (or simply to thin, if you chose thick to begin with).

Ensure you have SSH enabled on your ESXi host, and log in as root (or su to root from your user account).

Shut down the VM you wish to shrink. (I’d suggest consolidating any snapshots and making a backup, just in case something goes sideways.)
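One thing worth knowing before you shut it down: the hole punch below can only reclaim blocks that are actually zeroed, so if the guest has deleted a lot of data you may want to zero its free space first. A crude sketch for a Linux guest (the /zerofill name is arbitrary, and dd exiting with a “disk full” error is expected):

[code]
# Inside the Linux guest: fill free space with zeros, then delete the file
dd if=/dev/zero of=/zerofill bs=1M
sync
rm -f /zerofill
[/code]

Then shut the guest down and carry on with the steps below.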

Change directory to the path holding your VM; it will look something like /vmfs/volumes/53448b8c-b6d48f58-692a-ac220bdcff63/server_name (you may have to hunt down the right path).
For example, I am going to shrink my vCenter Server Appliance, which lives in
/vmfs/volumes/53448b8c-b6d48f58-692a-ac220bdcff63/VMware-vCenter-Server-Appliance-5.5.0.5201-1476389_OVF10

the directory looks like this:
# ls -ltrah
total 140495888
-rw-r--r-- 1 root root 0 Apr 10 23:57 VMware-vCenter-Server-Appliance-5.5.0.5201-1476389_OVF10.vmsd
-rw-r--r-- 1 root root 311 Apr 10 23:57 VMware-vCenter-Server-Appliance-5.5.0.5201-1476389_OVF10.vmxf
-rw------- 1 root root 547 Apr 11 00:05 VMware-vCenter-Server-Appliance-5.5.0.5201-1476389_OVF10.vmdk
-rw------- 1 root root 553 Apr 11 00:05 VMware-vCenter-Server-Appliance-5.5.0.5201-1476389_OVF10_1.vmdk
drwxr-xr-t 1 root root 1.6K Apr 11 01:49 ..
-rw------- 1 root root 100.0G Apr 12 01:37 VMware-vCenter-Server-Appliance-5.5.0.5201-1476389_OVF10_1-flat.vmdk
-rw------- 1 root root 8.5K Apr 12 01:37 VMware-vCenter-Server-Appliance-5.5.0.5201-1476389_OVF10.nvram
-rw------- 1 root root 25.0G Apr 12 01:37 VMware-vCenter-Server-Appliance-5.5.0.5201-1476389_OVF10-flat.vmdk
-rw-r--r-- 1 root root 125.6K Apr 12 01:37 vmware.log
-rwxr-xr-x 1 root root 3.1K Apr 12 01:37 VMware-vCenter-Server-Appliance-5.5.0.5201-1476389_OVF10.vmx

As you can see, the directory contains 125+ GB (I closed my terminal window with the actual du output).
But I know it’s using closer to 10 GB, so let’s shrink it down…

Notice there are two virtual disks ending with OVF10.vmdk & OVF10_1.vmdk

# vmkfstools -K ./VMware-vCenter-Server-Appliance-5.5.0.5201-1476389_OVF10.vmdk
vmfsDisk: 1, rdmDisk: 0, blockSize: 1048576
Hole Punching: 100% done.

# vmkfstools -K VMware-vCenter-Server-Appliance-5.5.0.5201-1476389_OVF10_1.vmdk
vmfsDisk: 1, rdmDisk: 0, blockSize: 1048576
Hole Punching: 100% done.

This may take a bit of time to complete, depending on your disk speed, etc.

End result looks the same but notice the actual usage:
# ls -ltrah
total 11077648
-rw-r--r-- 1 root root 0 Apr 10 23:57 VMware-vCenter-Server-Appliance-5.5.0.5201-1476389_OVF10.vmsd
-rw-r--r-- 1 root root 311 Apr 10 23:57 VMware-vCenter-Server-Appliance-5.5.0.5201-1476389_OVF10.vmxf
-rw------- 1 root root 547 Apr 11 00:05 VMware-vCenter-Server-Appliance-5.5.0.5201-1476389_OVF10.vmdk
-rw------- 1 root root 553 Apr 11 00:05 VMware-vCenter-Server-Appliance-5.5.0.5201-1476389_OVF10_1.vmdk
drwxr-xr-t 1 root root 1.6K Apr 11 01:49 ..
-rw------- 1 root root 100.0G Apr 12 01:37 VMware-vCenter-Server-Appliance-5.5.0.5201-1476389_OVF10_1-flat.vmdk
-rw------- 1 root root 8.5K Apr 12 01:37 VMware-vCenter-Server-Appliance-5.5.0.5201-1476389_OVF10.nvram
-rw------- 1 root root 25.0G Apr 12 01:37 VMware-vCenter-Server-Appliance-5.5.0.5201-1476389_OVF10-flat.vmdk
-rw-r--r-- 1 root root 125.6K Apr 12 01:37 vmware.log
-rwxr-xr-x 1 root root 3.1K Apr 12 01:37 VMware-vCenter-Server-Appliance-5.5.0.5201-1476389_OVF10.vmx
drwxr-xr-x 1 root root 1.5K Apr 12 02:03 .

# du -hs
10.6G .

Much better!
Now restart your VM and move on to your next project.
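For what it’s worth, vmkfstools can also clone a disk directly to thin format, which is another route if hole punching isn’t supported on your datastore. A sketch, with placeholder file names:

[code]
# Clone the disk to a new thin-provisioned VMDK
vmkfstools -i server_name.vmdk -d thin server_name-thin.vmdk
# After verifying the VM boots from the new disk, re-point the .vmx
# at the thin copy and remove the old disk
[/code]

The trade-off is that you need enough free space on the datastore for the copy while both disks exist.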

ESXi 5.5 Error 110: Connection timed out

Setting up ESXi 5.5 on a new box for my home lab:

Step 1) Install the hypervisor on its new hardware. Yeah, not so easy. The first run went smooth as silk… only to find that VMware had stripped the Realtek 81xx NIC drivers… rebuild the install ISO with the drivers… check. Re-install… as they say, “That’s when the fight started.” Multiple attempts, each resulting in Error 110: Connection timed out anywhere from 26% through 76% complete. Maybe a bad burn (I was installing from CD)? Switching back to the first Realtek-less ISO still errored. I swapped out the brand new CD/DVD drive (maybe it had issues), grabbed another brand new hard drive; nothing seemed to work. Same thing with multiple flash drives. I re-downloaded the installer and tried a few more times… Just before removing what little remains of the hair on my head, I had a thought: I powered down my workstation and dropped in the CD and the USB flash drive (just for good measure I pulled the SATA cables from my drives; no need to allow even the remote chance of 4T of drives being blasted for no good reason). Booted the CD and the installation zipped right through.

Now for the moment of truth: could it actually work on different hardware? Booted right up and has been happy ever since… <insert shocked face here>

My ESXi ‘server’
Asrock 970 Pro3 R2.0 mainboard
AMD FX-6300 Vishera 3.5GHz (4.1GHz Turbo) Socket AM3+ 95W 6-Core
8GB G.SKILL Ripjaws X Series 240-Pin DDR3 SDRAM DDR3 1600 (another 24G is coming soon)
Radeon HD 7750 Core Edition 1GB 128-bit GDDR5 PCI Express 3.0 x16  (WAY overkill, but will work great for pass-thru video later)
Boot Drive – 4G Kingston USB Thumbdrive (new from the parts drawer)
Datastores are currently being served via NFS (FreeNAS on similar hardware with 5 1T drives in RAID10 w/hot spare)

My Workstation that actually did the initial install
Asus M5A97 LE R2.0 mainboard
AMD FX-8350 Black Edition Vishera 4.0GHz (4.2GHz Turbo) Socket AM3+ 125W 8-Core
8GB G.SKILL Ripjaws X Series 240-Pin DDR3 SDRAM DDR3 1600
XFX Double D FX-787A-CDFC Radeon HD 7870 GHz Edition 2GB 256-Bit GDDR5 PCI Express 3.0 x16

The question remains: why would the Asrock system not complete the install to any media? This board has been used by many and is known to work quite well. The Asus board, on the other hand, not so much.

Either way, I know many others have run into this Error 110: Connection timed out issue on all sorts of hardware, taking many, many install attempts to get through the process. Just for grins, and to make sure I hadn’t just gotten lucky, I built several more thumb drives from my workstation; they were all successful on the first attempt.

Something is screwy with the Asrock system.  Now to find out why….

In any event, if you encounter this problem repeatedly, you CAN install from another system and transplant the drive, and it *should* work, given my limited testing so far.

Of course my patent-pending 50/50 guarantee applies, if you break your box in half… you get to keep both halves!