The 'Other' Y2K Problem
Publication: C²I² Press Release Issued: 1999-10-01 Reporter: C²I² Systems


By Louis van Alphen, CEO, Basix Automation


Obscured among media reports of ‘obvious’ computer systems affected by the Year 2000 problem (Y2K), such as PCs, servers and networks, are lower-profile but by no means less important ‘embedded’ systems, such as manufacturing and process control, medical equipment and transport management systems, that often are heavily time-dependent and therefore particularly vulnerable to the Y2K crisis.

These ‘invisible’ embedded systems, of which over 700-million are estimated to be manufactured and sold each year (over and above the installed base of many billions of devices), constitute the ‘other’ Y2K problem.

Y2K, as is now widely known, stems from most older (and many modern) systems’ use of microprocessors designed with a two-digit, rather than four-digit, code to represent the year. Thus 1999 is held as 99, 1918 as 18 and so on. At the turn of the new century, less than 6 months away, the date will roll over to ‘00’, which could be misinterpreted by some software calculations as 1900 (or some other date) instead of 2000, with possibly severe consequences.
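The failure mode can be sketched in a few lines. This is a hypothetical illustration, not code from any actual system; the function name and values are invented for the example.

```c
/* Hypothetical sketch: date arithmetic on a two-digit year field,
 * as many pre-2000 systems stored it. */

/* Interval between two stored years, each held as two digits (0-99). */
int years_elapsed_naive(int start_yy, int now_yy)
{
    return now_yy - start_yy;
}

/* A record written in 1999 (stored as 99) and checked after the
 * rollover in 2000 (stored as 00) appears to be -99 years old,
 * rather than 1 year old. */
```

Any downstream logic consuming that negative interval, such as a maintenance scheduler or an expiry check, inherits the error.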

‘Obvious’ computer systems – PCs, servers etc. – are microprocessor based and therefore, obviously, are vulnerable to Y2K.

However, so too are the multitudes of microchip-based embedded devices, which process a fixed set of programming instructions to manage the operation of electro-mechanical equipment or machinery. Often hidden away or buried in machinery control cabinets, embedded systems typically perform time-critical process-control type tasks.

Embedded systems may be affected by Y2K in three main areas:

1.  The Real Time Clock (RTC)

This is often a small crystal-controlled chip within the device that stores the current time/date over a long period (thanks to battery backup). Software refers to the RTC on power-up and periodically thereafter to establish current time/date. Earlier RTCs were limited to a 2-digit year field, which implies that software has to interpret that ‘99’ means 1999 and ‘00’ means 2000 (not 1900).

The Y2K problem lies not so much in the shortened date-field itself, but rather in the method used by the software to do date calculation. If the method used does not calculate date/time differences correctly, an upgrade/replacement will be necessary. It is estimated that less than 20% of embedded systems worldwide use RTCs.
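One widely used remediation for a two-digit RTC field was ‘windowing’: interpreting the stored year against a pivot instead of assuming 19xx. A minimal sketch follows; the pivot value 70 is an assumption chosen for illustration, and real remediations picked a pivot suited to the date range the system actually handled.

```c
/* Windowing: map a two-digit year (0-99) to a full year using a pivot.
 * Values below the pivot are taken as 20xx, the rest as 19xx.
 * The pivot of 70 is illustrative only. */
int full_year_windowed(int yy)
{
    return (yy < 70) ? 2000 + yy : 1900 + yy;
}

/* full_year_windowed(99) -> 1999
 * full_year_windowed(0)  -> 2000   ('00' no longer reads as 1900) */
```

The technique works only as long as all dates the system handles fall within the chosen 100-year window, which is why it was a stop-gap rather than a permanent fix.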

2.  Operating system

The operating system (OS) of an embedded system is rarely as visible to the user as, say, Windows is to the desktop PC user. Often the embedded system’s OS is ‘burned’ as ‘firmware’ onto a read-only memory chip and is all but invisible except for a few variable settings (e.g. travel limits, max/min temperatures, rpm etc.) that often are stored in a separate battery-backed memory module.

In ‘black box’ type systems it can be very hard to determine whether the date/time is used at all. Moreover, the date could be embedded in the system such that even experienced technicians have trouble ascertaining whether or not there is a date-dependency, and hence a Y2K problem, within the system.

3.  Application software

This is the main program controlling the functions of the particular device. It normally consists of instructions entered or created either by the manufacturer or by the shop-floor engineer/operator on-site, often customising the device to specific requirements.

Some control systems, such as smaller programmable logic controllers (PLCs), can be quite simple, while others are very complex with many commands available to the programmer. Any number of commands may use time/date functions so it is necessary, from a Y2K-compliance point of view, to scrutinize the program logic for user-defined date-dependencies (and not simply take the manufacturer’s word that the controller is Y2K-ready).
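A concrete example of a user-defined date dependency is the leap-year rule. The functions below are illustrative only, not drawn from any real controller: a shortcut rule that happens to give the right answer for 2000 can still be wrong for other century years, and the opposite shortcut (treating all century years as non-leap) caused some systems to skip 29 February 2000.

```c
#include <stdbool.h>

/* Shortcut rule: divisible by 4. Right for 2000, wrong for 1900 and 2100. */
bool is_leap_shortcut(int year)
{
    return year % 4 == 0;
}

/* Full Gregorian rule: every 4th year, except century years,
 * except every 400th year. 2000 is a leap year under this rule. */
bool is_leap_gregorian(int year)
{
    return (year % 4 == 0 && year % 100 != 0) || (year % 400 == 0);
}
```

Spotting this kind of logic requires reading the application program itself, which is exactly why the manufacturer’s compliance statement about the bare controller is not enough.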

It is clear from the above that there is no simple or quick way to determine the Y2K-compliance of embedded systems.

It has been calculated that only 1% of embedded systems are affected by the ‘millennium bug’. But the question is: which 1%?

Put another way: of 100 controllers in a particular plant, which one is at risk? And how critical is it to the overall operation of the plant? What effect(s) will malfunction/failure of an embedded device have? Could the device operate incorrectly and yet not cause disruption? If so, should the device be left as is or replaced/upgraded?

Often the effects of failure are negligible, but sometimes they can be catastrophic. Hawaiian Electric Company earlier determined that its energy management system (EMS), which remotely controls transmission system breakers, coordinates power generation schedules, compensates for large transmission line breaks and provides protection against voltage/current/frequency transients, would have crashed on the rollover to 01.01.2000 if not upgraded. This would have sent the utility’s transmission network crashing, causing a major power outage and loss of all generating capacity.

Few embedded system failures will have dire effects. But company executives and production managers have to ask themselves this question: are we, or are we not, willing to take the risk? Should we simply do nothing and keep the machinery/plant/rig running, trusting that the consequence will not be an operational shutdown or, worse, major catastrophe?

An alternative might be to do nothing except plan a shutdown at the end of December 1999. But this could lead to restart difficulties or, in extreme cases, failure to start, accompanied by possible severe health, safety and environmental problems.

There is, I believe, only one practical and responsible course of action (from legal, business and environmental perspectives), and that is to tackle the problem head-on, even at this relatively advanced stage (bear in mind 01.01.2000 is not a cut-off date, as is widely believed: Y2K problems will be experienced long before this date, and could continue for a number of years thereafter).

A decisive start therefore is required, taking advantage of the generic, widely adopted ‘Y2K framework/methodology’ developed and refined over the past 4-5 years. This approach encompasses the following phases, some of which typically are actioned in tandem:

1. Awareness

  • Raise awareness of the problem at appropriate levels.
  • Secure senior management buy-in and support.

2. Assessment, inventory and analysis

  • Compile a full inventory of microprocessor-dependent systems.
  • Ascertain the criticality of each relative to the organization’s ability to continue its business unaffected.
  • Assess the risk (probability of failure vs. impact of failure) for each critical system.

3. Planning and scheduling

  • Based on (2), plan to address critical systems first.
  • Ensure resources will be available at the right time for long lead-time items.

4. Renovation and remediation

  • Fix (refurbish/replace/bypass) the problems in order of priority.

5. Validation (testing)

  • Check each remediation individually and separately.
  • If problem-free, check each remediation in context of the larger process(es).

It’s worth dwelling, for a moment, on point 5, which relates to testing. Clearly this is a critical stage of the Y2K project, all the more so because the process of testing itself introduces an element of risk that needs to be managed with care.

Experience shows that the following aspects must be considered during the test phase:

  • Scope of the test must be closely defined (is it the process as a whole being assessed, or individual components?).
  • Systems and subsystems to be tested must be correctly identified.
  • Function(s) of the component(s) within the process must be clearly understood in order to develop and apply meaningful tests.
  • Test must be properly executed, with care being taken to obtain the right approvals and minimize risk while planning for any eventuality.
  • Finally, results must be correctly interpreted. There is no substitute for experience in this regard.

Critical related issues are supply chain management (collecting information about external suppliers with a view to assessing their Y2K readiness) and contingency planning (preparing plans to minimize or eliminate the impact of previously identified business risks).

Basix Automation, because of its considerable system engineering, software development and industrial control experience/expertise, has been involved in a number of Y2K projects involving high numbers of embedded systems. Customers include Cape Metropolitan Council, East London Transitional Local Council and Northern Province Provincial Administration, who have retained Basix under the auspices of IBM to project-manage, co-ordinate and execute various Y2K activities.

In each case an important feature of Basix’s involvement has been the design and population of an extensive Y2K database, incorporating all relevant information (progress, actions taken, issues discussed/resolved etc. etc.) for each site or system. This comprehensive information resource acts as both a powerful administration aid during the course of the project and, on completion of the programme, a longer-term logistics, decision support, planning and management tool for the customer organization.