Optimising Network Performance with 64-bit/66 MHz PCI Bus Interfaces
Publication: CI Press Release  Date issued: 1999-09-18  Reporter: CI Systems

The SysKonnect SK-NET GE product line of Gigabit Ethernet server adapters represents a unique combination of high performance and fault tolerance through an innovative implementation of failover redundancy. Whether single-port or dual-link, the SK-NET GE NIC provides the best in performance for high-end servers requiring maximum availability and reliability. SysKonnect's unique RLMT (Redundant Link Management Technology) lets network designers and managers build fail-safe Gigabit Ethernet configurations for their networks. Redundant NIC ports can be connected to a second Gigabit switch or repeater, enabling failover from a bad cable segment, port, or even a complete switch or repeater.

The SK-NET GE server NICs support the most powerful implementation of the 64-bit/66 MHz PCI bus, as well as all other PCI bus configurations using 32 or 64 bits at 33 or 66 MHz clock speeds. The product line includes 1000BASE-SX and 1000BASE-LX adapters offering both single network ports and redundant ports for the highest levels of reliability and fault tolerance.

SysKonnect has also optimised these adapters for use in critical server environments by adding several features to reduce CPU utilisation and system load while maximising the performance and reliability of your server connectivity.

The heart of the SK-NET GE product line consists of a high-performance ASIC developed by SysKonnect. This is the primary device used to provide the connection between the PCI bus and MAC controller, i.e. the link between the host motherboard and the Gigabit Ethernet network.

Although the Gigabit Ethernet server adapters from SysKonnect were designed for the 64-bit/66 MHz PCI bus interface, they are also fully backward compatible with 32-bit systems operating at 33 MHz or 66 MHz clock speeds. The SK-NET GE can be characterised as a true Plug-and-Play adapter for all relevant PCI settings. The adapter automatically configures itself to operate in a 64-bit or a 32-bit slot, and can also auto-sense and initialise itself for a clock speed of 33 MHz or 66 MHz, for addressing in 32-bit or 64-bit mode, and for signal voltages of 3.3 V or 5 V. As with typical PCI bus adapters, the entire configuration of the PCI bus (I/O, IRQ, INT A, etc.) is handled automatically.

The 64-bit/66 MHz PCI bus is considered state-of-the-art in high-end server systems today. It has a theoretical bandwidth which is four times higher than conventional 32-bit/33 MHz systems. Given its potential capacity of 533 Mbyte/s, the PCI interface offers sufficient bandwidth to meet networking requirements over the long term. SK-NET GE adapters use this PCI potential to its full extent, ensuring that these server NICs are not the bottleneck in your system bus.
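The bandwidth figures quoted above follow directly from bus width times clock rate; a quick sketch of the arithmetic (plain calculation, not vendor data):

```python
# Theoretical peak PCI bandwidth = bus width (in bytes) * clock rate.
def pci_peak_bandwidth_mb_s(bus_bits, clock_mhz):
    """Return the theoretical peak transfer rate in Mbyte/s."""
    return (bus_bits // 8) * clock_mhz

legacy = pci_peak_bandwidth_mb_s(32, 33)   # conventional 32-bit/33 MHz PCI
modern = pci_peak_bandwidth_mb_s(64, 66)   # 64-bit/66 MHz PCI

print(legacy)           # 132 Mbyte/s
print(modern)           # 528 Mbyte/s (~533 with the exact 66.67 MHz clock)
print(modern // legacy) # 4 -- four times the conventional bandwidth
```

The nominal clock is really 66.67 MHz, which is where the 533 Mbyte/s figure in the text comes from.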

With wider data paths and an increased clock speed, the network adapters use host resources sparingly. Since all address paths are also 64 bits wide, the adapter can directly address more than 4 Gbytes of main memory in a single 64-bit address cycle. Because compatibility with 32-bit systems remains important for many of today's systems, the SK-NET GE adapters also support the 32-bit "dual address cycle" mode required by many existing systems.

To optimise performance and throughput, the SK-NET GE supports PCI burst data transfers: several data blocks are grouped together and transferred from the adapter to system memory as a single contiguous unit. The advantage of supporting burst data transfers is reduced CPU overhead for data transmissions. The optimised burst size and PCI cache line size on the SK-NET GE is 1 518 bytes. This is exactly the maximum packet length for Gigabit Ethernet, which enables optimum utilisation of the PCI burst mode and permits the adapter to use the PCI bus very efficiently.

Onboard Memory

The large onboard memory of the SK-NET GE (1 Mbyte) was designed to allow a flexible architecture of send and receive buffers (FIFOs). This enables the optimum buffering of data in every configuration in order to achieve the maximum possible performance in each case.

The size of the receive buffer is highly significant when the adapter is attached to a full-duplex switch. If communication between a full-duplex switch and an end device uses asymmetric flow control, the output of the full-duplex switch cannot be throttled by the end device. If the buffer were to overflow at the receiving station, the data would need to be retransmitted, resulting in significantly lower performance for the adapter. The large onboard memory of the SK-NET GE minimises the risk of such events ever occurring.
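To put the 1 Mbyte buffer in perspective, a rough figure for how long the receive FIFO can absorb full line-rate traffic that the host is not yet draining (simple arithmetic, not a SysKonnect specification):

```python
# How long can the onboard buffer absorb line-rate traffic on its own?
BUFFER_BYTES = 1 * 1024 * 1024      # 1 Mbyte onboard memory
LINE_RATE_BITS_S = 1_000_000_000    # Gigabit Ethernet line rate

fill_time_ms = BUFFER_BYTES * 8 / LINE_RATE_BITS_S * 1000
print(round(fill_time_ms, 2))  # ~8.39 ms of worst-case line-rate traffic
```

Several milliseconds of headroom is ample time for the host to resume draining the FIFO, which is why overflow-triggered retransmissions become unlikely.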

Bus-Mastering Mode with Packet/Fragment Descriptors

The SK-NET GE works as a DMA bus master on the PCI bus, controlling read and write access to and from main server memory. The adapter can transfer data as contiguous and sequential fragments on the PCI bus. This is achieved through a descriptor ring: a concatenated list of pointers to the data fragments to be transferred. Once the descriptor ring has been set up, the list is processed fully and autonomously by the SK-NET GE's ASIC. This offloads the data transfer from the host system and optimises the overall performance of the server system.
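The descriptor-ring scheme can be illustrated with a small model: the host fills ring entries (fragment pointer plus length) and hands them to the adapter, which walks the ring autonomously until it reaches an entry it does not own. The names and fields below are illustrative only, not the actual SK-NET GE register or descriptor layout:

```python
from dataclasses import dataclass

@dataclass
class Descriptor:
    addr: int        # physical address of the data fragment (illustrative)
    length: int      # fragment length in bytes
    hw_owned: bool   # True once the host hands the entry to the adapter

def host_enqueue(ring, index, addr, length):
    """Host side: fill a descriptor, then transfer ownership to the adapter."""
    d = ring[index]
    d.addr, d.length = addr, length
    d.hw_owned = True
    return (index + 1) % len(ring)      # next free slot

def asic_process(ring, index):
    """Adapter side: walk the ring, 'DMA' each owned fragment, return bytes moved."""
    moved = 0
    while ring[index].hw_owned:
        moved += ring[index].length      # stand-in for the actual DMA transfer
        ring[index].hw_owned = False     # hand the entry back to the host
        index = (index + 1) % len(ring)
    return moved, index

ring = [Descriptor(0, 0, False) for _ in range(4)]
tail = host_enqueue(ring, 0, 0x1000, 1024)
tail = host_enqueue(ring, tail, 0x2000, 494)
total, head = asic_process(ring, 0)
print(total)  # 1518 -- one full Ethernet frame gathered from two fragments
```

The ownership flag is the key design point: once descriptors are posted, the host never needs to touch the adapter again until completion, which is what makes the transfer autonomous.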

Intelligent I/O for Maximum Performance and Compatibility

The I/O system requirements of modern servers go far beyond the usual file, mail and print services. New activities such as database queries, online transactions, multimedia applications and the explosive growth of intranets and the Internet have all contributed to the demand for increased throughput in server systems. To enable performance and compatibility improvements in the I/O behaviour of these systems, the I2O SIG (Intelligent Input/Output Special Interest Group) has defined the I2O specification and architecture. Most leading manufacturers of servers and peripheral devices are members of this steering committee.

The main goal of the I2O specification is to increase system throughput by allowing time-consuming I/O tasks to be handled by dedicated I/O processors, or IOPs. This technique can increase the performance of a server considerably. A further I2O objective is to provide a structured environment for the development of intelligent and distributed I/O systems. The goal here is to achieve independence from hardware and software platforms, e.g. by providing a single device-specific driver, which can then run on different operating systems.

The hardware-independent I2O architecture is characterised by a multi-level driver model consisting of an operating system-dependent OSM (Operating System Specific Module) and a hardware-specific HDM (Hardware Device Module). The OSM runs on the host under the operating system, whereas the HDM runs on an I/O processor (IOP). The two modules communicate by exchanging messages at the session layer.
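The split-driver idea can be sketched as two halves exchanging messages through queues; the module and message names below are invented for illustration and are not taken from the I2O specification:

```python
import threading
from queue import Queue

inbound, outbound = Queue(), Queue()   # message frames between OSM and HDM

def osm_request(op, payload):
    """OS-specific module: post a request message and wait for the HDM's reply."""
    inbound.put({"op": op, "payload": payload})
    return outbound.get()

def hdm_service_one():
    """Hardware device module (running on the IOP): handle one request."""
    msg = inbound.get()
    # Stand-in for driving real hardware: just acknowledge the operation.
    outbound.put({"op": msg["op"], "status": "ok"})

# The HDM half runs concurrently, as it would on a separate I/O processor.
threading.Thread(target=hdm_service_one, daemon=True).start()
reply = osm_request("send_frame", b"\x00" * 64)
print(reply["status"])  # ok
```

Because the two halves share only a message interface, the same HDM could in principle serve OSMs written for different operating systems, which is precisely the portability goal described above.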

Due to their interrupt-driven principle, I/O processes can place a very high load on the main processors and bus systems of servers. The I2O specification enables system resources to be used more efficiently, since the entire I/O traffic can be handled by the IOP, independent of the host CPU.

The simplest method of implementing I2O compatibility is to use an IOP on the system board of the server, or as an independent module on the PCI bus. The I2O driver of a network adapter would then run on that IOP, for example. The most efficient method that is likely to become commonplace in the future is to have a separate IOP on each peripheral device so that parallel I/O processes could be handled via different IOPs.

Please refer to www.syskonnect.com or www.ccii.co.za for more information.