D-Link DGE-530T–Different Revisions, Different Drivers

Posted June 3, 2013 by thefrugaladmin
Categories: Computers and Internet

I recently ran into an issue trying to install drivers for a D-Link DGE-530T Gigabit NIC in a Windows Hyper-V Server 2012 test box. I discovered that different revisions of the card use completely different chipsets. Drivers for the DGE-530T are NOT available out of the box in Windows Server 2008 R2 or Windows Server 2012. Unfortunately, the drivers currently available from the D-Link.ca support site only appear to support the latest revision – Revision “C”. Revision “C” uses a Realtek chipset, but Revision “B” of the card uses a Marvell chipset. The latest driver file from D-Link (Version 10.00) doesn’t include the Marvell drivers.
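
If you’re not sure which revision (and therefore which chipset) a particular card is, the PCI vendor ID will tell you: Marvell’s vendor ID is 11AB and Realtek’s is 10EC. A quick way to check from a command prompt – handy on Hyper-V Server, which has no full GUI – is a WMI query like the sketch below (the card usually shows up as an unknown “Ethernet Controller” until the right driver is installed, and any other Marvell or Realtek PCI devices in the box will show up too):

rem Look for the card by PCI vendor ID: VEN_11AB = Marvell (Revision B), VEN_10EC = Realtek (Revision C)
wmic path win32_pnpentity where "DeviceID like 'PCI\\VEN_11AB%' or DeviceID like 'PCI\\VEN_10EC%'" get Name,DeviceID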

I have several Revision B2 cards. I found the drivers for them in the Version 8.00 driver file, which I had to get from the D-Link USA support site. I used the drivers in the “\dge530T_drivers_20080901\Windows 2008 64 driver” folder of the zip file.

FWIW, I tried a Revision B2 card with Windows 7 – the drivers were installed automatically (presumably from Windows Update). The DGE-530T is NOT a server-class NIC, so it’s not surprising that the drivers aren’t available out of the box for a server O/S, even though a number of other “desktop class” NIC drivers are present.

I will post a more detailed description of the process I went through to get these drivers installed in another post. For now, hopefully this will save somebody the aggravation I went through trying to get this card working.

Follow-up Questions from my ITPro Toronto Restore-a-Palooza Presentation.

Posted April 15, 2012 by thefrugaladmin
Categories: Computers and Internet

I wanted to follow up on a few questions that came up during my Restore-a-Palooza Presentation to the ITPro Toronto user group on Tuesday April 10th. Special thanks to Michael Suthern for keeping track of them for me.

1. How many clients can be backed up by each version of the server O/S?

Windows Home Server(WHS) 2011 – 10

Windows Storage Server 2008 R2 Essentials (WSSE) – 25

Windows Small Business Server (SBS) Essentials 2011 – 25

2. Can any of the ‘Colorado’ servers be used in an existing domain to back up a small number of workstations?

If there are fewer than 10 workstations and you only want to use the server for client backup, then WHS 2011 will do the job nicely. However, if there are more than 10 PCs to back up (or soon could be more than 10), then you would want to look at WSSE.

3. What happens when a backup is interrupted for any reason?

From Microsoft Windows Home Server 2011 Unleashed: In previous versions of Windows Home Server, if a backup was interrupted, it just failed, and a completely new backup wouldn’t start until the next scheduled backup time. Windows Home Server 2011 now tracks the client backup process, and if the backup is interrupted for any reason (for example, losing the network connection), Windows Home Server 2011 resumes the backup from where the interruption occurred.

4. Is a restore possible to different hardware?

It is possible. It might require additional steps, depending on the O/S. For example, with Windows XP, you might have to adjust the boot.ini file and/or do a repair install of Windows XP to install the correct drivers for the new hardware. Windows Vista and Windows 7 are typically much more tolerant of hardware changes. The hardware change would almost certainly trigger the need to re-activate Windows. This might not be permitted by the EULA, depending on the version of Windows that was originally installed.

5. Will any of the servers back up an SBS 2003 server?

No. The “Colorado” servers will only back up a currently supported client O/S, and the connector will not install on a server O/S. The only exception is Windows MultiPoint Server. You could install the connector for WHS version 1 on a server O/S and back it up; however, that was not a supported use of the product.

6. Can you change the remote access ports on any of the servers?

Not really. As I understand it, the problem appears to be that port 443 is hard-coded into the Remote Web Access site. You can get it partially working by using the port-forwarding features of some routers. For example, you could forward port 4443 on the internet side to port 443 on the WHS server and then go to https://fqdn-of-the-server:4443/remote. This gives you access to the file shares, but you won’t be able to connect to computers using the Remote Desktop options.

7. Any favorite add-ins for these servers, and where do you get them?

I don’t have any favorite add-ins at the moment. Most of the add-ins I used for WHS Version 1 were for features that have been built into the Colorado servers. But if you’re looking for add-ins, here are some places to look:

http://windows.microsoft.com/en-US/windows/products/windows-home-server/customize

http://www.mswhs.com/category/add-ins/

http://www.wegotserved.com/category/add-ins/windows-home-server-2011-add-ins/

http://homeservershow.com/forums/index.php?/forum/43-whs-2011-add-ins/

Windows Activation May Be Required on First Boot After an ASR Recovery

Posted December 28, 2011 by thefrugaladmin
Categories: Computers and Internet

After performing an ASR recovery of a Windows Server 2003 O/S, the first time you attempt to log into the server, you may receive a prompt to re-activate Windows. If you can’t or don’t want to activate Windows right away, you can reset the Windows Activation so that you can at least log into the server.

As part of the Disaster Recovery Planning that I do for my clients, I do a full recovery of their server to a test environment to make sure that the server can be recovered and to document any issues that may arise. That way, in the event of a real emergency, I’ll be prepared to perform the recovery as quickly and as smoothly as possible.

During a recent recovery test for a client, I was restoring a server running a Volume License version of Windows Server 2003 R2 x64 and Exchange 2007. It had been running on a Dell PE1950 server and I was restoring it to very dissimilar hardware – just a desktop PC with RAM and hard drive space equivalent to the Dell’s. I was documenting the process and preparing a checklist of the necessary steps. This unit wasn’t going into production, so I wasn’t concerned that the hardware wasn’t up to server specs. But even if I had been restoring to new server-class hardware, I’m pretty sure I would have run into the same activation issues. Even restoring to a virtual environment has given me similar results.

In my opinion, if you don’t have a 3rd party backup solution that performs an image-based backup of the server, the best and fastest way to get a Windows Server 2003 based O/S back up and running is with an Automated System Recovery (ASR) restore. I use that to restore the O/S and system files, and then use the latest available backup to restore the data. If any applications are installed on the system partition, they usually come along as part of the system files, but they could be restored from backup as well. That’s a lot less work than installing the O/S from scratch, patching it and re-installing all the apps.

Running an ASR backup is fairly simple: open NTBackup and step through the ASR wizard. The only thing that complicates the process these days is that an ASR backup requires a floppy disk – not something you find often on newer hardware. I use a USB floppy drive for machines that don’t have one (this Dell PE1950 didn’t, for example). I like to update the ASR backup of each server at least 3 or 4 times a year. Otherwise, you may have to run a lot of updates after the restore to get the machine back up to the patch level of your most recent data backup.
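
As an aside, the ASR set itself is created through the wizard, but the regular backups that keep the data and System State current between ASR refreshes can be scripted with ntbackup’s command-line mode. A minimal sketch (the job name and target path below are just examples):

rem Back up the System State to a .bkf file (job name and path are examples only)
ntbackup backup systemstate /J "Weekly System State" /F "E:\Backups\SystemState.bkf"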

Anyway, I ran the test ASR restore of this particular server and, after it completed, the server rebooted. When the Ctrl-Alt-Del prompt finally came up, I entered the local administrator credentials. Before I got to the desktop, I was presented with a dialogue box stating that the hardware had changed since Windows was first installed and that I needed to re-activate Windows to continue. That wasn’t going to happen, even if I had wanted it to: I couldn’t do an internet-based activation because the NIC drivers hadn’t been installed yet. I suppose I could have tried a telephone activation, but I didn’t want to replace the original server – I just wanted to test the restore.

This seems like a bit of a Catch-22: you can’t log into Windows until you re-activate, and you can’t re-activate until you log into Windows and configure the NIC. Restarting in safe mode isn’t much help because you can’t install drivers in safe mode.

The way around this is to reset the Windows Activation – essentially return Windows to an un-activated state. After doing that, you have the traditional 30 days to activate Windows – just like you do with a new install. This is more than enough time for my recovery testing procedure. And in the event of a real emergency, it would get you up and running again and give you enough time to sort out the licensing. Since this particular server was running a volume license copy, it would have been permissible to transfer the license to new hardware. If it was an OEM copy, I would still have 30 days to purchase a new copy of Windows Server and get the machine properly licensed.

The procedure to reset the activation is surprisingly simple.

  1. Reboot the server in safe mode.
  2. Log in with the local administrator credentials.
  3. Open a command prompt.
  4. Enter the command “rundll32.exe syssetup,SetupOobeBnk” (without the quotes) to reset Windows activation.
  5. Reboot.

When the server reboots, Windows will be un-activated. You will now have 30 days to activate, just like you would with a fresh install. You can log in, install drivers and continue to restore the data.
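
If you want to double-check the activation status afterwards, you can launch the Windows Product Activation wizard directly from a command prompt on XP and Server 2003 – it will either report that Windows is already activated or offer to activate it (you can cancel out without activating):

rem Opens the Windows Product Activation wizard to show the current activation status
%systemroot%\system32\oobe\msoobe.exe /a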

This trick gave me enough time to complete my recovery testing. If this had been a real disaster recovery instead of a test, and I was going to put the recovered server into production, I would still need to activate Windows with a valid license. But I could do that at a more convenient time, after I had the server up and running and the users back to work.

Installing new network gear? Update that firmware first!

Posted December 13, 2011 by thefrugaladmin
Categories: Computers and Internet

One of my clients had a problem recently that turned out to be caused by a new network switch running old firmware. From now on, I’ll be checking a lot more equipment for firmware updates before I install it. Here’s why…

The client bought a new Dell PowerConnect 5324 Smart Switch as part of a new iSCSI SAN that they were implementing. They ran through the initial configuration to assign it an IP address and change the default password and then installed it in their server rack and started using it. The switch was only connected to the iSCSI network so they didn’t do any further configuration.

Shortly after getting everything configured and transferring data to the SAN, they started to experience intermittent, short-term loss of connectivity to the iSCSI targets. The event logs of the Windows servers connected to the SAN showed the network cards on the iSCSI subnet losing connectivity to the network; the error stated that the network link was down. Checking the event logs of the new switch revealed that a fatal error had occurred and the switch had rebooted.

An internet search on the error message turned up that this was a known issue, corrected in version 2.0.0.36 of the firmware. The switch was running version 2.0.0.35 – one version older. The client has another switch of the same model that was at least a year old; interestingly, it was also running version 2.0.0.35 of the firmware.

Checking the Dell support web site, it turned out that version 2.0.0.46 of the firmware was available at that point in time. So this brand new switch had arrived with firmware that appeared to be about 10 versions old – a little surprising to both me and the client.

Now, to be fair, the 255-page Dell manual that comes as a .pdf file on the CD accompanying the switch clearly states that you should update the firmware before installing the switch. However, in my opinion, a sticker on the switch emphasizing this important point would be very helpful.

Updating firmware is something often recommended by the manufacturers of servers and their components – RAID cards, NICs and so on – and I know Dell and HP release update disks to help with this. I’ve also updated the firmware of routers and firewalls to correct issues and add features. But updating the firmware of a switch is not something that occurred to either my client or me as being necessary, probably partly because they weren’t really using the smart features of these particular switches. The need became painfully obvious to my client, and it’s something I’ll be adding to my own practice when installing new equipment.

It’s worth mentioning that PC and motherboard manufacturers seem less eager for you to update the firmware or BIOS of their products. Most of them have warnings in the support sections of their sites that basically say “don’t upgrade the BIOS if the PC is working”. I suspect those instructions are aimed at enthusiasts rather than IT professionals.

Updating the firmware on one of these switches is definitely worth doing before the unit goes into a rack. Beyond the fact that the latest firmware can prevent some potential issues, it’s been my observation that these smart switches require a serial connection in order to access the command line interface used to upload and select the new firmware. A notebook with a serial port that you can take to the rack is becoming increasingly hard to find, and getting to the serial port of a switch in a rack can be physically challenging depending on where the port is located. At least if the switch is out of the rack, you can take it to a PC that has a serial port to do the upgrade.

I’ll do another post on what I went through to update the firmware on this and a couple of other switches at this client.

Monitor Changes to Non-default Resolution on Reboot–Caused by a Startup Service

Posted September 20, 2011 by thefrugaladmin
Categories: Computers and Internet

I had an issue a little while ago with a monitor changing resolution on every reboot. Hopefully anybody having a similar problem will find this post useful as a guide to tracking down the cause.

I’ve been running dual monitors for some time. When I’m working on remote systems, it’s great to be able to have a remote system on one screen and my local PC on the other. Using 2 screens takes a bit of getting used to at first, but then you never want to go back.

At the beginning of 2011, I bought myself a new monitor: 24”, 16:9, 1920 x 1080, adjustable height, with VGA, DVI and HDMI inputs. The other monitor I was using was an older 19” 4:3 monitor that served as the primary screen. I got everything set up and was very happy with all the extra screen real estate.

But I noticed that every time I rebooted the PC (which wasn’t often at that time), the new monitor would come up at less than its default resolution. I had to go in and manually adjust the resolution back up to 1920 x 1080 from the 1680 x 1050 it kept defaulting to. I assumed it was a driver or hardware issue, so I updated to the latest drivers for my video card and made sure I was using the proper .inf file for the monitor.

The problem didn’t go away.

Internet searches produced a lot of hits from people having trouble with monitor resolution changing after a reboot of Windows 7, but nothing conclusive about how to fix it. Most of the suggestions related to drivers, which I had already tried. So I started to wonder if it was some kind of issue caused by mixing 4:3 and 16:9 monitors, or maybe a problem with the video card.

So one day, when I’d had enough, I decided to try to solve the problem once and for all. I changed video cards – it didn’t help. I went down to only one monitor (the new 24” monitor) – it got worse! Now it was changing to 1280 x 1024. At this point it was difficult to keep blaming hardware or drivers.

One of the things I noticed through all this testing was that when the PC first rebooted and the Welcome screen appeared, the resolution looked OK. Then, after a couple of seconds, the resolution changed down to the other setting. That got me thinking it could be a software problem. I had found one forum post from a couple of years earlier where someone claimed his issue was caused by Windows Live Messenger loading at boot. I don’t have Messenger loading by default, but it at least pointed me toward a startup program or service.

I loaded MSConfig and selected a diagnostic startup. This disables all startup programs and non-essential services.

When the PC rebooted, the resolution didn’t change. Hmmm!

The recommended procedure for this kind of troubleshooting is that you go back into msconfig and enable one item at a time, reboot and see if the problem appears. When it does, it’s likely that the last item you enabled caused the problem.

I don’t have that much patience. So I had a look at the various items to see if anything looked suspicious. One thing jumped out, based on the forum post I mentioned – “Windows Live Mesh remote connections service”. I seem to recall installing Windows Live Mesh around the same time I got the new monitor.
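
(As an aside, if you want a quick inventory of what’s registered to run at startup without clicking around in msconfig, wmic will dump a list of startup entries – roughly what msconfig’s Startup tab shows:)

rem List startup entries (Run keys and Startup folders - roughly msconfig's Startup tab)
wmic startup get Caption,Command,User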

Since a lot of things weren’t running well with all those services and startup items disabled, I re-enabled all of them except the Windows Live Mesh remote connections service and rebooted. Much to my delight, the monitor came up at the proper default resolution.

Just to be sure, I enabled Windows Live Mesh remote connections service and rebooted. The monitor changed resolution a couple of seconds after the Welcome screen appeared. I disabled the service again and have never looked back.
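
If you prefer the command line, the same change can be made with sc config. Note that sc wants the service’s short name rather than the display name – you can find it on the General tab of the service’s Properties dialog (the name below is just a placeholder):

rem Replace ServiceShortName with the actual service name from the service's Properties dialog
sc config "ServiceShortName" start= disabled
sc stop "ServiceShortName"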

I use Windows Live Mesh on this PC, but I don’t use it for remote access, so I never looked any further. When I was putting this post together, though, I decided to have another look. I went into the Services snap-in and enabled the Windows Live Mesh remote connections service. The resolution on my widescreen monitor changed to 1280 x 1024. (At this point I have the widescreen as the primary monitor on the left and the 4:3 19” monitor on the right.) The 19” monitor was running at its default resolution of 1280 x 1024.

Knowing the cause of the problem, I could do a more targeted internet search. I found an entry on the Microsoft Answers site that solved the issue. It pointed to the registry key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Live Mesh\Remote Desktop\DisplayDevices\DEVICE0. Not surprisingly, the key fields are width and height.

In my case, I had entries for Device0 and Device1. Device0 was set for a width and height of 1280 x 1024 and Device1 was set for 1680 x 1050. As I recall, when I originally installed Live Mesh, I was running the 19” as the primary monitor and I had a 22” with a resolution of 1680 x 1050. I edited the Device0 entry and changed the width to 1920 and the height to 1080.

resolution change 1

Then I did the Device1 entry. It was set for 1680 x 1050, but I guess since that was higher than the default resolution of the monitor, the monitor stayed at its default. I changed the width and height to 1280 and 1024.

resolution change 2

I also changed the “x” value from 1024 to 1980. I assume the x and y values are the offset for the second monitor which I have configured as an extension of the desktop to the right of the primary monitor. I haven’t been able to test this since I don’t have another setup with a similar monitor arrangement.
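
If you’d rather script the change than edit the values by hand, a reg add sketch like the one below should work. This assumes the Width and Height values are DWORDs (check the value names and types in regedit before running it), and the numbers match my monitors, so adjust them for yours; the X and Y offsets can be changed the same way.

rem Sketch only - verify the value names and types in regedit first; resolutions below match my setup
reg add "HKLM\SOFTWARE\Microsoft\Live Mesh\Remote Desktop\DisplayDevices\DEVICE0" /v Width /t REG_DWORD /d 1920 /f
reg add "HKLM\SOFTWARE\Microsoft\Live Mesh\Remote Desktop\DisplayDevices\DEVICE0" /v Height /t REG_DWORD /d 1080 /f
reg add "HKLM\SOFTWARE\Microsoft\Live Mesh\Remote Desktop\DisplayDevices\DEVICE1" /v Width /t REG_DWORD /d 1280 /f
reg add "HKLM\SOFTWARE\Microsoft\Live Mesh\Remote Desktop\DisplayDevices\DEVICE1" /v Height /t REG_DWORD /d 1024 /f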

I’ve left the Windows Live Mesh remote connections service disabled since, as I mentioned before, I don’t use it for remote access to this PC. I did test starting and stopping the service several times and there is no resolution change now.

P2V Conversion of an SBS 2003 Server–Part Five

Posted April 10, 2011 by thefrugaladmin
Categories: Computers and Internet

Part Five – Last Tweaks and We’re Running

This is the last post in a Five part series about my experience doing a P2V conversion of my SBS 2003 server to Hyper-V. 

Here are links to the other parts:

 Part One – Creating the VHD

 Part Two – Creating the VM

 Part Three – First Run Tweaking the Server VM

 Part Four – Second Run Tweaking the Server VM

Last Tweaks

When the server reboots for this third run, we again get warnings about service failures. We need to investigate those.

third run 1 service failure

When we log in, the Windows Product activation warning also appears. We’ll take care of that later. Click the No button.

first login 1b activation error

So the first thing to do after we get to the server desktop is to open the Services MMC and make sure the services that are set to start automatically have started. Perhaps we’ll find the cause of the service failure warning.

Click first on the Status column header and then on the Startup Type column header. Any service that is set to start automatically but isn’t started will be at the top of the list. The only services that aren’t running are “Performance Logs and Alerts” and “Microsoft .NET Framework 4”. That’s typical on an SBS 2003 box. So what caused the service failure warning? Time to check the event log.
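
(As an aside, the same “set to Automatic but not running” check can be done from a command prompt, which is handy if you have several servers to look at:)

rem List services that are set to start automatically but are not currently running
wmic service where "StartMode='Auto' and State<>'Running'" get Name,DisplayName,State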

third run 1a all services OK

Checking the System event log shows that the only failing service was the “Parallel port driver service”.

third run 1b check event logs

This is pretty common with modern servers, both physical and virtual – they just don’t put parallel ports in them anymore. There are two ways to correct it, depending on whether you’re a GUI or a command-line type. In the GUI, open regedit, navigate to HKLM\SYSTEM\CurrentControlSet\Services\Parport and change the value of the “Start” DWORD to “4” (disabled).

third run 1c parport svcs registry

Or from a command prompt type:

sc config parport start= disabled (make sure there is a space between “start=” and “disabled”).

third run 1d parport command

Configuring the SBS 2003 Backup

So other than the parallel port issue, all the regular SBS 2003 services are started and running correctly. Now we need to turn our attention to the backup. I was previously using a USB hard drive connected to the server as a destination for the built-in SBS Backup. You can’t connect a USB drive directly to a Hyper-V guest – Windows Virtual PC has that capability, but Hyper-V doesn’t. However, you can attach the USB drive to the host server, take it offline and then make it available to the guest O/S as a pass-through disk.

After connecting the USB drive to the Hyper-V host, open Server Manager and select “Disk Management”. Locate the USB drive (in this case, it’s Disk 3), right-click the left side of the disk and select “Offline”.
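
If you’d rather use the command line than Disk Management, diskpart can take the disk offline as well. A sketch, assuming the USB drive really is Disk 3 – run “list disk” first and double-check the disk number, because offlining the wrong disk would ruin your day:

diskpart
list disk
select disk 3
offline disk
exit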

third run 2 take usb offline

Now go into Hyper-V Manager, select the virtual machine and then, in the Action pane, click “Settings”. When the Settings dialog box opens, locate and click on the SCSI Controller. Make sure “Hard Drive” is highlighted (it’s the only choice) and then click the “Add” button.

third run 2a add scsi hard drive

Select “Physical hard disk:” and then select the USB drive from the drop down list. Press the “OK” button.

third run 2aa add physical disk

I should point out that the ability to do this “hot add” of a SCSI Hard disk to a running VM is a function of Hyper-V R2. If you are running the non-R2 version of Hyper-V, you will have to shut down the machine in order to perform the step above.

So now that the USB drive has been configured as a pass-through disk, we can open My Computer and confirm that it’s available to the O/S. It shows up as drive K: with a volume label of “Comstar250A” – the same drive letter and volume label it had when it was connected to the physical server.

third run 2b usb drive shows in VM

So now that the backup drive is present, we can run a backup and make sure it works. Open Server Management on the SBS server, click on Backup in the left column and then click “Backup Now”. The SBS Backup starts running.

third run 2c test backup

Make sure the backup completes successfully.

Run the SBS 2003 BPA

As a further test to make sure the server is running properly, I ran the SBS 2003 Best Practices Analyzer, which was already installed on my server (it’s available for download from Microsoft). The results of the scan were acceptable, with only two warning items, both of which can be safely ignored.

third run 2d sbs bpa scan

Activate the Server

Having confirmed that the server is running properly in the new virtual environment, it was time to activate it. I am running the Action Pack version of SBS 2003, so moving the license to different hardware is allowed under the EULA. If this were an OEM copy, activation would probably still work, but it would be a violation of the OEM license agreement.

If this were only a test run of a migration, the 3 days you are allowed to run the server before having to activate it might be enough. I did a test migration to SBS 2008 last year using essentially the same technique and managed to complete it within the 3 days.

Click on the Activation Icon in the bottom right corner of the screen.

third run 3 activation icon

The Activate Windows box opens. I chose to activate over the internet which is usually the easiest option.

third run 3 activation

I chose not to register with Microsoft.

third run 3a register

I have successfully activated my copy of Windows.

third run 3b activated

Conclusion

The server has performed flawlessly as a VM since it was converted in late February 2011. In fact, performance seems better than it was on the physical hardware, despite the fact that I am still running on the dynamically expanding virtual hard drive that Disk2VHD created. I have read that the performance of a dynamically expanding VHD in Hyper-V R2 is not significantly different from that of a fixed-size disk, but I haven’t tested this myself.

I am getting ready to test a migration to SBS 2011 using the Microsoft Migration method. Having the server in a virtual environment already means I can simply copy the VHD and spin up another VM in an isolated network to test the migration.

P2V Conversion of an SBS 2003 Server–Part Four

Posted April 10, 2011 by thefrugaladmin
Categories: Computers and Internet

Part Four – Second Run Tweaking the Server VM

This is Part Four of a Five part series about my experience doing a P2V conversion of my SBS 2003 server to Hyper-V. 

Here are links to the other parts:

 Part One – Creating the VHD

 Part Two – Creating the VM

 Part Three – First Run Tweaking the Server VM

 Part Five – Last Tweaks and We’re Running

Second Run – Configuring the New Virtual Hardware

The server reboots and we see the warning about services failing to start. Click the OK button.

After logging in, the Windows Product Activation Warning comes up. Click the No button – we still aren’t ready to activate.

This time there are some different errors: UPSMON_Service.exe has crashed and the display resolution is set very low. The UPSMON crash is caused by an incomplete removal of the UPS software. Don’t send the error report – we don’t have an internet connection yet – but go ahead and adjust the resolution.

To take care of the UPSMONService error, go into the Services MMC, change the UPSMONService startup type to Disabled and then click the OK button.

Non Present Device Clean-Up

There will be a number of hardware devices from the original physical server that are still listed in the Plug and Play enumerator even though they are no longer present. Most of them won’t cause any issues, but some can be a problem. In particular, the O/S still has the physical NIC(s) listed, and there is probably still an IP address assigned to that hardware. If we assign the same IP address to the virtual server that the physical server had, we’ll get a TCP/IP warning telling us the IP address is already assigned.

These “Phantom NICs” may come back to bite us at some point in the future, so it’s a good idea to get rid of them. Open a command prompt and type the following two lines:

Set devmgr_show_nonpresent_devices=1

Start devmgmt.msc

It’s important to type both commands into the command prompt rather than just starting Device Manager from the GUI. That’s because the “devmgr” environment variable that we just set will only apply to programs that are launched from the command prompt environment.
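
To save typing (and to guarantee the variable and Device Manager end up in the same environment), the two commands can be dropped into a small batch file:

@echo off
rem Launch Device Manager with non-present ("phantom") devices visible
set devmgr_show_nonpresent_devices=1
start devmgmt.msc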

When Device Manager opens, click on View and then “Show Hidden Devices”.

Expand Network Adapters and we can see the phantom NIC. Non-present devices show with a fainter icon than devices that are present. Click on the non-present NIC and then click the Uninstall button.

Click the OK button in the Confirm Device Removal warning box. The RAS Async Adapter gave an error when I tried to remove it, so I left it there.

Now at this point, depending on how anal you are about having a clean device manager, you can continue removing other non-present devices in other categories. For example, there is actually only one Disk Drive present in the server at the moment. All the other entries are from the various internal and external hard drives as well as USB sticks that had been plugged into the physical server over its lifetime. I’m fairly anal so I removed most of them. It’s a bit of a tedious job since the non-present devices must be removed one at a time. When we’re finished (or fed up removing phantom devices one by one), close Device Manager.

Now that the phantom NIC(s) are gone, open the Properties of the Local Area Connection and then double-click Internet Protocol (TCP/IP).

Enter the server’s fixed IP address information. In my case, this was the same address the physical server had used. I had no reason to change it, and by using the same address for the virtual server that was replacing the physical server, I didn’t have to adjust the router.
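
For what it’s worth, the same settings can also be applied from a command prompt with netsh. A sketch with made-up addresses (on an SBS box the preferred DNS server is normally the server’s own IP address):

rem Example addresses only - substitute your own IP, mask and gateway
netsh interface ip set address name="Local Area Connection" static 192.168.16.2 255.255.255.0 192.168.16.1 1
netsh interface ip set dns name="Local Area Connection" static 192.168.16.2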

Now that the network card has the correct IP address, we can connect the virtual machine to a virtual network so that it can communicate with the rest of the network. On the Virtual Machine Connection window, click File > Settings.

Click on the Network Adapter in the hardware column and then select the appropriate virtual network from the drop down list on the right. I have several virtual networks on my Hyper-V server, including some isolated networks I use for testing. I connected this machine to the production network since it was replacing my production server. Click the OK button.

To confirm network connectivity, we can ping some hosts on the network, or open Internet Explorer and confirm we have Internet Access.

At this point I chose to reboot the server. I probably could have kept going, but I wanted to give everything a chance to start up from a clean boot.

In Part Five, I’ll cover the Last Tweaks to the server O/S, as well as configuring and testing the SBS backup to run in a Virtual Environment.