VIA SCSI & RAID Devices Driver



  • sg_sat_identify (8) - send ATA IDENTIFY DEVICE command via SCSI to ATA Translation (SAT) layer; sg_sat_phy_event (8) - use ATA READ LOG EXT via a SAT pass-through to fetch SATA phy event counters; sg_sat_set_features (8) - use ATA SET FEATURES command via a SCSI to ATA Translation (SAT) layer; sg_scan (8) - scans sg devices (or SCSI/ATAPI/ATA devices) and prints results; sg_senddiag (8) - performs a SCSI SEND DIAGNOSTIC command.
  • The lsscsi command is a handy tool for getting all sorts of information about SCSI devices. It can report the various transports in use on the system, such as ATA, Fibre Channel (FC), IEEE 1394 (SBP), iSCSI (target only), SCSI Parallel Interface (SPI), Serial Attached SCSI (SAS), SATA, and USB. A small sketch of where some of this information comes from on Linux follows below.
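
As a rough illustration of where part of lsscsi's output comes from (this is not lsscsi's own code, just a minimal sketch), the C program below walks /sys/bus/scsi/devices on Linux and prints the vendor and model attributes of each host:channel:target:lun entry:

```c
/* Toy listing in the spirit of `lsscsi`: enumerate /sys/bus/scsi/devices and
 * print the vendor/model sysfs attributes of each H:C:T:L device entry.
 * lsscsi itself reports far more (transport, device nodes, and so on). */
#include <dirent.h>
#include <stdio.h>
#include <string.h>

static void print_attr(const char *dev, const char *attr)
{
    char path[512], buf[128] = "";
    snprintf(path, sizeof(path), "/sys/bus/scsi/devices/%s/%s", dev, attr);
    FILE *f = fopen(path, "r");
    if (f) {
        if (fgets(buf, sizeof(buf), f))
            buf[strcspn(buf, "\n")] = '\0';   /* drop trailing newline */
        fclose(f);
    }
    printf("%-18s", buf);
}

int main(void)
{
    DIR *d = opendir("/sys/bus/scsi/devices");
    if (!d) { perror("opendir"); return 1; }

    struct dirent *e;
    while ((e = readdir(d)) != NULL) {
        /* SCSI device entries are named host:channel:target:lun, e.g. "2:0:0:0";
         * other entries (host0, target2:0:0, ...) have fewer than three colons. */
        int colons = 0;
        for (const char *p = e->d_name; *p; p++)
            if (*p == ':') colons++;
        if (colons != 3)
            continue;

        printf("[%s]  ", e->d_name);
        print_attr(e->d_name, "vendor");
        print_attr(e->d_name, "model");
        printf("\n");
    }
    closedir(d);
    return 0;
}
```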

I have been involved in creating High Availability solutions for our NightWatchman Enterprise products, and as part of that work I have recently been working with Windows Failover Clustering and NLB clustering in Windows Server 2012. I thought it might be good to share my experiences with implementing these technologies. I’ve built multi-node failover clusters for SQL Server, Hyper-V, various applications, and Network Load Balancing clusters. The smallest were two-node SQL Server clusters; I’ve built five-node Hyper-V clusters; and the largest clusters I’ve built were VMware clusters with more than twice that number of nodes.

I will cover the creation of a simple two-node Active/Passive Windows Failover Cluster. In order to create such a cluster it is necessary to have a few items in place, so I figured I should start at the beginning. This installment addresses connecting to shared storage using the iSCSI initiator; later installments in this series will cover the remaining steps.

scsi_id is primarily for use by other utilities, such as udev, that require a unique SCSI identifier. By default all devices are assumed blacklisted; the --whitelisted option must be specified on the command line or in the config file for any useful behaviour. SCSI commands are sent directly to the device via the SG_IO ioctl interface.
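
For readers who want to see what "sending a SCSI command via the SG_IO ioctl" looks like in practice, here is a minimal sketch (not code from scsi_id itself) that issues a standard SCSI INQUIRY to an sg device; /dev/sg0 is only an example path and usually requires appropriate permissions:

```c
/* Minimal sketch: issue a 6-byte SCSI INQUIRY through the SG_IO ioctl. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <scsi/sg.h>

int main(void)
{
    unsigned char cdb[6] = { 0x12, 0, 0, 0, 96, 0 };  /* INQUIRY, 96-byte allocation length */
    unsigned char resp[96];
    unsigned char sense[32];
    struct sg_io_hdr io;

    int fd = open("/dev/sg0", O_RDONLY);              /* example device only */
    if (fd < 0) { perror("open"); return 1; }

    memset(&io, 0, sizeof(io));
    io.interface_id    = 'S';                 /* required magic for the sg driver */
    io.cmd_len         = sizeof(cdb);
    io.cmdp            = cdb;
    io.dxfer_direction = SG_DXFER_FROM_DEV;   /* data flows device -> host */
    io.dxferp          = resp;
    io.dxfer_len       = sizeof(resp);
    io.sbp             = sense;
    io.mx_sb_len       = sizeof(sense);
    io.timeout         = 5000;                /* milliseconds */

    if (ioctl(fd, SG_IO, &io) < 0) { perror("SG_IO"); close(fd); return 1; }

    /* Bytes 8-15 of the standard INQUIRY response hold the vendor id,
     * bytes 16-31 the product id. */
    printf("vendor: %.8s  product: %.16s\n",
           (const char *)resp + 8, (const char *)resp + 16);
    close(fd);
    return 0;
}
```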


Clusters need to have shared storage, and that is why I am starting with this topic. You can’t build a cluster without having some sort of shared storage in place. For my lab I built out a SAN solution. Two popular methods of connecting to shared disk are iSCSI and FibreChannel (yes, that is spelled correctly). I used iSCSI, mainly because I was using a virtual SAN appliance running FreeNAS that I had created on my VMware host, and iSCSI was the only real option there. In a production environment you may want to use FibreChannel for greater performance.

To start, I built two machines with Windows Server 2012 and added three network adapters to each of these prospective cluster nodes. One is for network communications and is on the LAN. The second is for iSCSI connectivity (other methods can be used to connect to shared storage, but I used iSCSI as mentioned above). The third is for the cluster Heartbeat network. It is the Heartbeat that monitors cluster node availability so that a failover can be triggered when a node hosting resources becomes unavailable. These three network adapters are connected to three separate networks, with the iSCSI and Heartbeat NICs attached to isolated segments dedicated to those specific types of communication.

When you are done building out the servers they should be configured exactly the same. This is because the resources hosted by the cluster (disks, networks, applications) may be hosted by any of the nodes in the cluster, and any failover of resources needs to be predictable. If an application requiring .NET Framework 4.5 is running on a node that has .NET Framework 4.5 installed, and the other node in the cluster does not, then the application will not be able to run if it has to fail over to the other node, rendering your cluster useless.

Configure the network interfaces on the systems and ensure that they can communicate with the other interfaces on the same network segment (including the shared storage that the iSCSI network will connect to). On the LAN segment, include the Gateway and DNS settings, because this network is where all normal communications occur and it will need to be routed through your network. For the iSCSI and Heartbeat networks you only need to provide the IP address and the subnet mask. I used a subnet mask of 255.255.255.240 (a /28, allowing up to 14 hosts) on the iSCSI and Heartbeat networks in my lab setup since there were only two machines; companies often use very small subnets for these networks. You do not need to provide a gateway or a DNS server, since these networks are single, isolated subnets and their traffic will not be routed to any other subnet.

The steps I have used to configure iSCSI are as follows:

1. Run the iSCSI Initiator from the Server 2012 Apps page.
2. Click Yes to have the iSCSI Initiator service start, and set it to start automatically each time the computer restarts. You want this to start automatically so that the cluster nodes will be able to reconnect to the shared storage when they reboot.
3. The iSCSI Initiator opens to the Targets page. Select the Discovery page to continue.
4. There will be no target portals listed when you first open the page. Click the Discover Portal button.
5. Enter the IP address of the shared storage iSCSI interface in the IP Address or DNS Name box. The port should be automatically set to 3260. Click Advanced.
6. In the Advanced Settings dialog, select Microsoft iSCSI Initiator from the Local Adapter dropdown box and select the IP address of the NIC that is dedicated to your iSCSI connection in the Initiator IP box. Click OK to close the Advanced Settings dialog, and click OK again to close the Discover Target Portal dialog.
7. At this point the target portals list on the Discovery page will contain the shared storage you configured in the previous step. Now you need to connect to LUNs on the shared storage.
8. Switch to the Targets page. Here you will see the LUNs that are available for you to connect to. Select the LUNs you wish to connect to and click the Connect button. It is a good practice to name your LUNs in such a way that they can be easily identified. In this example I have two sets of LUNs available, one for a SQL cluster and one for an App cluster. Both sets contain LUNs to serve as data and quorum disks on my cluster nodes.
9. The Targets page will look something like the example shown here. At this point you are connected to your iSCSI target.
10. If you switch to the Favorite Targets page you should now see a list of the LUNs on the iSCSI target you connected to. The LUNs listed here will be automatically connected whenever the machine is rebooted.


The next installment will be on configuring these new disks in the OS as you continue preparing to create your cluster. Until then, I wish you all a great day!


Microsoft provides a SCSI Port driver as a standard feature of the Microsoft Windows storage architecture. The SCSI Port driver streamlines the Windows storage subsystem by emulating a simplified SCSI adapter. Storage class drivers load on top of the port driver. This means that you can write storage class drivers for Windows with minimal concern for the unique hardware features of each SCSI adapter.

The emulation capabilities of the SCSI Port driver also allow you to develop minidrivers that are much simpler to design and code than a monolithic port driver. In other words, using the SCSI Port driver allows you to focus on developing a miniport driver that handles the particular features of your adapter.

To use the SCSI Port support routines, link to one of the SCSI Port support libraries, scsiport.lib or scsiwmi.lib. These SCSI Port libraries handle all interaction between the miniport driver and the hardware abstraction layer (HAL) of the operating system. Miniport drivers must not link directly to the HAL support library, hal.lib, nor should they link directly to the ntoskrnl.lib or libcntpr.lib support libraries. SCSI miniport drivers that do so are not eligible for a Windows logo.
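
To make the shape of such a miniport concrete, here is a heavily trimmed sketch rather than a working driver: it assumes the WDK headers are available, and the bus type, device-extension layout, and callback bodies are placeholders. It shows the pattern the SCSI Port model expects: DriverEntry fills in a HW_INITIALIZATION_DATA structure with the miniport's callbacks and passes it to ScsiPortInitialize, which scsiport.lib resolves.

```c
/* Rough sketch of a SCSI Port miniport entry point.  Not buildable as-is for
 * any real adapter; classic miniport samples include these WDK headers,
 * though the exact include set can differ between kit versions. */
#include <miniport.h>
#include <scsi.h>

/* Per-adapter state the port driver allocates on the miniport's behalf. */
typedef struct _HW_DEVICE_EXTENSION {
    PUCHAR IoBase;                       /* hypothetical register base */
} HW_DEVICE_EXTENSION, *PHW_DEVICE_EXTENSION;

static ULONG HwFindAdapter(PVOID DevExt, PVOID HwContext, PVOID BusInfo,
                           PCHAR ArgumentString,
                           PPORT_CONFIGURATION_INFORMATION ConfigInfo,
                           PBOOLEAN Again)
{
    *Again = FALSE;                      /* only one adapter in this sketch */
    return SP_RETURN_NOT_FOUND;          /* a real miniport probes the bus,
                                            fills ConfigInfo, returns SP_RETURN_FOUND */
}

static BOOLEAN HwInitialize(PVOID DevExt) { return TRUE; }

static BOOLEAN HwStartIo(PVOID DevExt, PSCSI_REQUEST_BLOCK Srb)
{
    /* A real miniport programs the adapter here and completes the SRB later;
       this placeholder just rejects everything. */
    Srb->SrbStatus = SRB_STATUS_INVALID_REQUEST;
    ScsiPortNotification(RequestComplete, DevExt, Srb);
    ScsiPortNotification(NextRequest, DevExt, NULL);
    return TRUE;
}

static BOOLEAN HwInterrupt(PVOID DevExt)              { return FALSE; }
static BOOLEAN HwResetBus(PVOID DevExt, ULONG PathId) { return TRUE; }

ULONG DriverEntry(PVOID DriverObject, PVOID RegistryPath)
{
    HW_INITIALIZATION_DATA hwInit;

    ScsiPortZeroMemory(&hwInit, sizeof(hwInit));
    hwInit.HwInitializationDataSize = sizeof(hwInit);
    hwInit.AdapterInterfaceType     = PCIBus;          /* placeholder bus type */
    hwInit.HwFindAdapter            = HwFindAdapter;
    hwInit.HwInitialize             = HwInitialize;
    hwInit.HwStartIo                = HwStartIo;
    hwInit.HwInterrupt              = HwInterrupt;
    hwInit.HwResetBus               = HwResetBus;
    hwInit.DeviceExtensionSize      = sizeof(HW_DEVICE_EXTENSION);
    hwInit.NumberOfAccessRanges     = 1;

    /* scsiport.lib resolves ScsiPortInitialize and the other ScsiPortXxx calls. */
    return ScsiPortInitialize(DriverObject, RegistryPath, &hwInit, NULL);
}
```

Everything adapter-specific lives in the callbacks; the port driver owns request queuing, synchronization, and interaction with the rest of the Windows storage stack, which is what keeps the miniport small.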

The following sections examine the key features of the SCSI Port driver.


A general discussion of SCSI Port miniport drivers is provided in SCSI Miniport Drivers.

The Windows storage architecture also provides the Storport Driver, the recommended alternative to SCSI Port for high-performance devices.