This post contains some of my notes and comments on studying and experimenting with a classic computer virus called Lehigh.

It also describes the use of a DevOps platform to analyze and test the Lehigh virus in an automated way.

Personal computer viruses in 1986 and 1987

To put the Lehigh virus in context, it is perhaps best to start with a historical look at this type of malware as described by Solomon in the late 1980s. Alan Solomon managed to synthesize and share a brief history of personal computer (PC) viruses (1986-1993) in just a few lines.

These lines are great for putting that period in perspective, and for identifying and understanding the technical milestones, geographic locations and the social triggers most relevant to the development of PC viruses and of the antivirus industry to come.

The first two years are probably the key period, in which not only do some of the most iconic viruses appear (Brain, Jerusalem, etc.) but also the narratives and metaphors linking computer viruses and biological viruses begin to be built.

The Lehigh virus appeared in 1987 and, curiously, was regarded as an extremely unsuccessful and barely infectious virus.

Along with Solomon's lines, it is also interesting to read Cohen's comments in section 2.3.3, "The Lehigh virus", of his book "A Short Course on Computer Viruses", where he describes the context and environment in which the Lehigh virus emerged.

Rise and Fall of the Virus at Lehigh University

Although the Lehigh virus appeared in 1987, perhaps the most complete description of the incident and its management was published two years later.

In 1989, Kenneth R. van Wyk, a member of the team that managed the incident, published an article titled "The Lehigh Virus" (Computers & Security, vol. 8, issue 2).

In that article, Ken describes in detail the computing infrastructure of the University that was affected by the virus, as well as the operation of the service and the challenges they faced in stopping the infection.

The article also walks step by step through the differential analysis used to deal with the virus, as well as the tools (mapmem, fc ...) used to eliminate it and to distribute a first antivirus program among affected users.

Ken also included in the article a pseudo-code representation that captured the logic of the virus.

      IF Another_Disk_Is_Being_Accessed THEN
         IF (The_Other_Disk_Is_Not_Infected AND
             The_Other_Disk_Is_Bootable) THEN
                 Infect_The_Other_Disk
                 Increment_Counter
                 IF Hard_Disk THEN
                     Increment_Counter
                 IF Counter >= 4 THEN
                     Overwrite_Part_Of_The_Disk

Technical coverage of the virus

Since its appearance and elimination at Lehigh University, the virus was widely discussed in the forums and technical mailing lists of the time. Most of the information shared about it was limited and intended to describe the virus at a high level, so that it could be identified, controlled, and removed if necessary.

Although some professionals and institutions obtained a copy of the virus as a result of the incident, these copies were kept in private repositories. This changed in 1989 with a thorough line-by-line analysis of the virus by Joe Hirst.

Hirst did not just disassemble and analyze the virus. He also verified that the virus could be rebuilt by reassembling it, providing quality reverse-engineered source code for the virus.

It is important to mention that this source code, although it contains all the original virus bits, requires modifications to initiate a first cycle of infection on the system. The changes are quite simple to make, but without them any attempt to run the virus will infect nothing and will simply crash the system.

Finally, in 1990, Harold Highland published his book "Computer Virus Handbook", which includes a series of articles that try to capture the most popular viral archetypes of that time.

In one of these articles, titled "A history of computer viruses: the famous 'trio'", Highland reviews the Lehigh virus and consults two technical advisers. One of them is Ken van Wyk. The article was also published in Computers & Security, Volume 16, Issue 5, 1997.

The Highland article compiles all the information on the Lehigh virus to date, adding technical value through the low-level reflections both technical advisers offer on the design of the virus and its implementation.

Both advisers dissect the virus and identify its weaknesses and limitations in detail.

How the virus works

The virus is housed in the default DOS command line interpreter, which is also the default user interface.

The implementation of the interpreter resides in a file called COMMAND.COM that has the role of being the first regular program to run after startup. By infecting COMMAND.COM, the virus thus obtains a guarantee of early execution on the system.

When an infected COMMAND.COM runs, the virus allocates new memory for itself and modifies interrupt vectors 21h and 44h of the DOS API.

Pointing interrupt vector 44h at the original 21h handler is how the virus makes direct service requests to the main DOS API.

Hooking interrupt vector 21h itself, on the other hand, is how the virus manages the replication and destruction phases.

The virus hijacks the EXEC (4Bh) and FIND FIRST (4Eh) services, so any execution of, or search for, a file in which a COMMAND.COM appears is susceptible to triggering an infection.
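The hooking scheme above can be modeled in plain Python (this is a conceptual sketch, not real-mode DOS code; the table, handler names and return values are illustrative assumptions, while the vector numbers and hooked services come from the analyses):

```python
# Model of the interrupt-vector manipulation: the old INT 21h vector is
# saved into the unused 44h slot, and the new 21h handler intercepts the
# EXEC (4Bh) and FIND FIRST (4Eh) services before falling through to DOS.

EXEC, FIND_FIRST = 0x4B, 0x4E

def dos_api(function, filename=None):
    """Stand-in for the original DOS INT 21h dispatcher."""
    return ("dos", function, filename)

def install_virus(vectors):
    """Hook the vector table the way the virus does."""
    vectors[0x44] = vectors[0x21]   # keep a private entry point into DOS

    def virus_handler(function, filename=None):
        if function in (EXEC, FIND_FIRST) and filename and "COMMAND.COM" in filename:
            pass  # here the real virus would attempt an infection
        return vectors[0x44](function, filename)  # fall through to DOS

    vectors[0x21] = virus_handler

vectors = {0x21: dos_api}
install_virus(vectors)
```

After installation, every call through vector 21h passes through the virus first, while vector 44h still reaches the untouched DOS dispatcher.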

The destructive code of the virus is activated once it has carried out four infections, overwriting part of the disk.

Infection of a COMMAND.COM file is carried out by overwriting the last bytes of the file and modifying its initial jump instruction to point to the virus code.

The Lehigh virus is considered a cavity virus because it overwrites the last bytes of COMMAND.COM, which often contain zeros. By taking advantage of this "cavity", the virus does not increase the length of COMMAND.COM.

The virus detects that a COMMAND.COM file has previously been infected by checking that its last two bytes are 0A9h and 65h.
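The infection mechanics just described can be sketched in Python. Only the two marker bytes and the general scheme (fill the trailing zeros, patch the initial jump, file length unchanged) come from the analyses above; the buffer sizes and the 3-byte E9h jump patch are simplifying assumptions:

```python
# Conceptual model of a cavity infection, not DOS machine code.

INFECTION_MARKER = bytes([0xA9, 0x65])  # last two bytes of an infected file

def is_infected(image: bytes) -> bool:
    """True if the image already carries the infection marker."""
    return image[-2:] == INFECTION_MARKER

def infect(image: bytearray, body: bytes) -> bool:
    """Lodge the virus body in the trailing zero bytes and patch the
    initial jump, leaving the file length unchanged."""
    if is_infected(bytes(image)):
        return False                      # never infect twice
    cavity_start = len(image) - len(body)
    if cavity_start < 3 or any(image[cavity_start:]):
        return False                      # body does not fit in the cavity
    image[cavity_start:] = body           # body ends with INFECTION_MARKER
    # Rewrite the first instruction as JMP rel16 (opcode E9h) into the
    # cavity; the displacement is relative to the end of the 3-byte jump.
    image[0] = 0xE9
    image[1:3] = (cavity_start - 3).to_bytes(2, "little")
    return True
```

A second call to `infect` on the same buffer returns False, mirroring the marker check the virus performs before touching a file.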

Virus design and known bugs

The original implementation of the virus has different bugs. These were identified and are documented in the references above.

The virus cannot detect whether it has already been installed by a previous copy. This causes it to reinstall itself as a resident with each execution of an infected file, which is bad both for memory consumption and for system stability.

The previous error can be solved by modifying the 21h interrupt handler so that the virus itself can be interrogated, allowing a new copy to look for a previous installation in memory.

The memory allocation made by the virus belongs to the process that hosts it. When the host process ends, memory will be deallocated but interrupt vectors will continue to point to that memory, which may become invalid at some point in the future. This error is reproducible, for example, by executing an infected COMMAND.COM multiple times and then exiting them.

Marking the memory requested by the virus as I/O system memory also prevents memory from being deallocated when the virus host terminates.

Other errors that were identified, but that could be considered an "absence of features", are related to the stealth capabilities of the virus itself.

When the virus infects a file, it changes its timestamp and can reveal its existence on the system. It is possible to add code to preserve the original timestamp.

The virus does not manage the attributes of the file it is trying to infect, so a proper set of permissions on a file can prevent its infection. It is possible to add code to bypass and preserve the original attributes of the file.

The virus does not handle I/O errors, so if it tries to infect a file on a write-protected floppy, the user will see an error revealing that the program is trying to write to the disk. It is possible to install a new I/O error handler before a file is infected and suppress this behavior.

Finally, the direction flag can be cleared (CLD) before using the MOVSB instruction to ensure proper operation.

The secret of the cavity

In relation to the Lehigh virus there is an interesting question related to the cavity or space that it uses to be stored in the infected COMMAND.COM file.

This space was identified in early analyses and public discussion forums as stack space, and in successive reports, articles and books written over the years it is always referred to as stack space initialized to zeros and used as a stack structure by COMMAND.COM.

If the latter is the case, two issues are striking:

  1. Why does COMMAND.COM have this stack space initialized to zeros and placed at the end of the file, when the stack could be configured dynamically, saving that disk space?

  2. Why does COMMAND.COM need such a large stack space?

I think both questions are related and, in my opinion, they find an answer in the design of DOS and in the compromises that the operating system makes in relation to memory management.

If we check the COMMAND.COM file at a low level, we can see that the space at the end of the file is used as stack space. There are references from the transient code part of COMMAND.COM to this last part known as transient space. The transient space contains transient uninitialized data.

This can be verified by reviewing the source code that Microsoft published in 2014 and that is available here. The parts to consider are the file where the space is defined, which can be found here; the reference to the stack space from the transient code, found here; and the definition of the stack, found here.

As you can see, the stack space allocation is static and will be initialized to zeros to make sure the linker is not fooled.

This static allocation appears to be a necessary requirement to ensure that, in any situation, COMMAND.COM will have enough memory when its transient part takes over. In other words, there is a trade-off between spending a few KB on disk and performing more complex memory management at run time.

On the other hand, it can be verified that this space initialized to zeros does not correspond only to the stack space, as you can see here.

The stack space defined in DOS 2.0 is only 80h bytes, while the rest of the space corresponds to variables within the transient space related to scan buffers, console buffers, etc.
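A quick way to check this on a real binary is to measure the run of trailing zero bytes, i.e. the "cavity" itself. A minimal Python sketch (the file path in the usage comment is hypothetical):

```python
# Measure how large the run of trailing 0x00 bytes at the end of a
# COMMAND.COM image actually is.

def trailing_zero_run(image: bytes) -> int:
    """Length of the run of 0x00 bytes at the very end of the image."""
    count = 0
    for byte in reversed(image):
        if byte != 0:
            break
        count += 1
    return count

# Usage (hypothetical path to an extracted DOS image):
#   data = open("DOS20/COMMAND.COM", "rb").read()
#   print(trailing_zero_run(data))
```

Comparing this run against the 80h-byte stack defined in the source makes it clear how much of the cavity belongs to other transient, uninitialized data.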


Automating the study of the virus

While analyzing and studying the viral code, it is easy to see that some of the flaws in the virus are significant enough to affect the behavior of the operating system under certain conditions.

In these scenarios the system becomes highly unstable, even crashing, which makes it impossible to study the virus in an agile way.

To solve this, it is possible to implement patches for all the problems identified in the previous section. With the new logic added, this new version of the virus becomes completely stable and can be used in an automated process with different versions of the operating system.

Although this makes it possible to evaluate the virus cycle (infection, replication and destruction) in a reliable and controlled environment, I found it interesting to also include integration and deployment aspects in the process.

This evolved approach leads us naturally to some of the current CI/CD platforms, together with the jargon of that domain (pipeline, project, jobs, etc.).

At this point, the GitLab project was a good fit for automating these experiments with the virus. GitLab is a complete DevOps platform that supports continuous integration, continuous delivery and continuous deployment.

The mapping of these more generic stages of the process to a continuous cycle of experiment is straightforward.

Our basic Gitlab pipeline contains 4 stages and 20 jobs that are triggered automatically with commits on the master branch.

Through pipelines, the complete (or a partial) set of patches is continuously integrated, with the intention of building an infected binary whose execution and behavior we then verify across different versions of the operating system.

The configured GitLab Runner uses the Docker executor to run jobs on provided images. The Docker executor connects to Docker engine and runs each build in a separate and isolated container using a predefined image. That way we can have a simple and reproducible build/test environment.

The default image extends Ubuntu 20.04 LTS Focal Fossa with the tools needed to emulate DOS. The emulator used is QEMU, which allows running the original DOS images extracted from floppy disks.

The image also includes all the dependencies that are not present in the base image and that are used during pipeline execution.

The four stages of the pipeline are 'build', 'infect', 'test' and 'integrate'.

In the first stage ('build'), an infected binary is built that will act as 'patient zero'. This 'patient zero' is not a genuine infected COMMAND.COM file; it simulates a host that is aware of its infection in order to report results that depend on the compilation and linking processes, such as the final size of the virus body. 'Patient zero' does not suffer any infection: a minimal infected file is generated directly from the source code of the new, modified virus.

In the second stage ('infect') the 'patient zero' is executed for specific DOS versions. The source of the infection may be the hard drive or it may be a floppy disk. In early versions of DOS it was common for the system to boot from floppy. At this stage, information is obtained before and after the infection about the file and the environment.

In the third stage ('test'), all the required tests are executed on the information obtained in the second stage. The output is verified against expectations and new information is generated by analysis tools.

The fourth and final stage ('integrate') performs integration tasks and compilation of results. All relevant information and artifacts of interest generated in the previous stages flow through the pipeline to be managed, consolidated and packaged at this stage. The result is a single packed file that contains information and binaries from the multiple tests and tools used in each of the differential analyzes.
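A hypothetical .gitlab-ci.yml sketch of the four stages described above; the stage names match the post, while the job names, image tag, script commands and the DOS version matrix are illustrative assumptions:

```yaml
stages: [build, infect, test, integrate]

default:
  image: registry.example.com/lehigh-lab:latest   # Ubuntu 20.04 + QEMU + DOS tooling

build-patient-zero:
  stage: build
  script:
    - make patient-zero            # assemble the patched virus into a minimal infected file
  artifacts:
    paths: [out/]

infect-dos:
  stage: infect
  parallel:
    matrix:
      - DOS_VERSION: ["2.0", "3.3", "5.0"]   # subset of the versions tested
  script:
    - ./run-qemu.sh "$DOS_VERSION" out/patient0.com   # capture state before/after infection
  artifacts:
    paths: [results/]

verify:
  stage: test
  script:
    - ./check-results.sh results/   # compare output against expectations

package:
  stage: integrate
  script:
    - tar czf lehigh-report.tar.gz results/
  artifacts:
    paths: [lehigh-report.tar.gz]
```

Pushing a commit to the master branch triggers the whole chain, ending with the single packed deliverable described above.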

Pipelines time reporting

Pipelines ran on modest hardware from 2016 running Ubuntu 20.04 LTS Focal Fossa with default options and settings, without any special optimization. Hardware details:

  • Intel(R) Core(TM) i7-6600U CPU @ 2.60GHz
  • 16 GB RAM
  • 512 GB SSD

The average execution times of the entire pipeline over the last 50 runs, with three jobs running in parallel (mm:ss):

  • Build: 00:12
  • Infect: 00:54
  • Test: 00:59
  • Integrate: 00:07

Total: 02:12

Wrapping up

The Lehigh virus is probably one of the most underrated computer viruses in the recent history of these infectious agents. Its extremely simple infection logic and tiny code footprint conceal the potential of a well-designed but not brilliantly implemented virus.

Historically, the virus is part of a triad of pioneers along with the Brain and Jerusalem viruses. Its rapid manifestation and a series of very notable bugs allowed it to be quickly identified, preventing it from spreading rapidly and becoming a real problem at Lehigh University, where it originally appeared.

It is interesting that the references consulted state that infecting COMMAND.COM is not something a virus of the time should do if it wanted to remain hidden and go unnoticed. Note, however, that these observations may be short-sighted, since they do not contemplate that a higher-quality implementation would not have exposed the virus so easily.

Infecting a single key file such as COMMAND.COM, without increasing its size and with such a simple and direct implementation, keeps the virus very small. This makes it a compact and, for its time, very fast-running program.

Most of the analyses consulted also focused on the limitations of the virus and its low infectivity, overlooking its approach of identifying and abusing a "cavity" in COMMAND.COM in which to lodge itself.

In an attempt to further explore the impact of using this cavity originally identified as a static stack space, patches and fixes were added for the identified bugs that allowed a new and stable version of the virus to run automatically on a DevOps platform.

Among the results of the experiments carried out on this new implementation, it is shown that the original approach of modifying the displacement of the jump instruction for the takeover, and of using the cavity as lodging, would have remained layout-compatible with a large number of future instances of COMMAND.COM.

However, explicitly forcing an unconditional jump and running a dynamic cavity search would probably have been a better strategy.

For the different tests and experiments, using limited and non-optimized hardware, the execution took a couple of minutes from pushing a new change to obtaining a deliverable with all the information integrated and packaged for nine versions of DOS.

Another aspect to take into account is that, although some of the original errors in the viral code required knowledge of DOS and its internal structures, other errors in the code have no clear technical justification. These latter errors are easy to detect and fix, and stand in contrast to other well-groomed and optimized parts of the virus.

The fact that the virus contained such an obvious series of bugs and such a low infection count before manifesting itself destructively suggests that the spread of the virus was not among the main goals of the author.

In a way, the obvious limitations of the implementation, probably known to the author, lead us to think that the virus was not meant to survive beyond the version of DOS for which it was programmed.

Fixing some of these simple bugs while keeping the assumptions of the original infection logic would have been enough to make the virus a much more infectious and portable program in future versions of DOS, as the tests and experiments suggest.

