
Re: LYNX-DEV C++/C mix


From: Michael Sokolov
Subject: Re: LYNX-DEV C++/C mix
Date: Mon, 18 Aug 1997 12:03:38 -0400 (EDT)

   Mixing C and C++ is easy. Tom's suggestion (an intermediate layer
written in C++ with Lynx-accessible functions declared with name mangling
disabled) would work, but depending on how the C++ library is written, an
easier way may be to disable name mangling in the C++ library itself for
the functions that are to be called from outside (add extern "C" to their
declarations).
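   For instance, a minimal sketch of that second option (the file and
function names are invented for illustration) could look like this:

   /* cpplib.h -- shared header for the C++ library.  The extern "C"
    * wrapper disables C++ name mangling, so plain C code such as Lynx
    * can call the function with ordinary C linkage. */
   #ifdef __cplusplus
   extern "C" {
   #endif

   int cpplib_fetch(const char *url);      /* callable from C */

   #ifdef __cplusplus
   }
   #endif

   /* cpplib.cpp -- the C++ side.  Including the header gives the
    * definition the same extern "C" linkage as the declaration. */
   #include "cpplib.h"

   int cpplib_fetch(const char *url)
   {
       return url ? 0 : -1;        /* full C++ may be used internally */
   }

   The C side (Lynx) then simply includes cpplib.h and calls
cpplib_fetch() like any other C function.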
   But how will this help you create Lynx for Windows v3.xx? Will it run
under vanilla Windows v3.xx or will it require Win32s? In the next few
paragraphs, I'll explain some of the problems you will encounter.
   Unlike most Lynx developers, who come from the world of UNIX and high-
level languages, I come from the world of Microsoft DOS, OS/2, and
Windows, and have been programming in assembly language for about 10
years. I have done
extensive research on the internals of Microsoft's operating systems, which
includes disassembling and tracing Microsoft's code, talking to the
legendary Microsofties who wrote it and even obtaining some of their
internal docs. As a result, whenever the Lynx Development Team touches the
world of Microsoft OSes, I can give you a lot of help. Let's start with an
overview of Windows internals and finish with their effect on Lynx.
Caution: the overview is long, about two hardcopy (U.S. Letter) pages.
However, if you want help with Lynx for Microsoft OSes, please be patient
and read through.
   Windows v3.xx running in 386 Enhanced mode consists of two major
subsystems: the VMM/VxD subsystem and the GUI subsystem. The VMM/VxD
subsystem knows nothing about Windows GUI. Instead, it provides a special
interface called DOS Protected Mode Interface (DPMI) which allows DOS
applications to run in protected mode and use the power of advanced CPUs.
Indeed, it deserves to be bundled with DOS, rather than with Windows,
creating Protected Mode DOS (but that's beyond the scope of Lynx-Dev). On
the other hand, the GUI subsystem is nothing more (and nothing less) than a
Protected Mode DOS (or DPMI) application running under the control of the
VMM/VxD subsystem. The fact that it comes on the same disk as the VMM/VxD
subsystem doesn't give it any special privileges; it's still nothing but a
DPMI application, no different from any other DPMI application.
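   To make DPMI concrete: DJGPP (the compiler already used for the
Protected-Mode DOS port of Lynx) makes every program it builds a DPMI
client. Here is a minimal sketch, assuming only the standard DJGPP
<dpmi.h> interface, that runs in protected mode and asks the DPMI host
(under Windows, the VMM/VxD subsystem) to reflect an interrupt down to
the real-mode DOS underneath:

   #include <stdio.h>
   #include <string.h>
   #include <dpmi.h>

   int main(void)
   {
       __dpmi_regs r;
       memset(&r, 0, sizeof r);   /* let DPMI supply a real-mode stack */
       r.h.ah = 0x30;             /* DOS "get version" function        */
       __dpmi_int(0x21, &r);      /* simulate real-mode INT 21h        */
       printf("DOS %d.%02d underneath, and we're in protected mode.\n",
              r.h.al, r.h.ah);
       return 0;
   }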
   However, there is more to it. The VMM/VxD subsystem starts running on
one physical machine with one physical address space, one physical display,
one physical keyboard, and one physical mouse, running one copy of DOS.
Once it's up and running, it produces multiple virtual machines (VMs).
Each VM emulates a separate computer. Each has its own instance of the
current DOS state (e.g., its own notion of the current drive and the
current
directory). Multiple DOS applications can run simultaneously in different
VMs, since each VM has its own virtual address space and its own virtual
display, keyboard, and mouse. The virtualization is so complete that an
application running in a VM can switch its virtual display between text and
graphics modes and do so by using IN and OUT instructions to the addresses
normally belonging to a physical video adapter: the VMM/VxD subsystem will
trap those addresses and virtualize them.
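   To see how complete this virtualization is, consider the following
DJGPP sketch, which programs the VGA CRT controller's cursor position
directly through its I/O ports. On real hardware this moves the real
cursor; in a Windows DOS box, the VMM/VxD subsystem traps the OUT
instructions and moves only that VM's virtual cursor:

   #include <pc.h>

   int main(void)
   {
       unsigned pos = 1 * 80 + 40;    /* row 1, column 40 in text mode */
       outportb(0x3D4, 0x0F);         /* CRTC index: cursor low byte   */
       outportb(0x3D5, pos & 0xFF);
       outportb(0x3D4, 0x0E);         /* CRTC index: cursor high byte  */
       outportb(0x3D5, (pos >> 8) & 0xFF);
       return 0;
   }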
   Now it's time to ask, "How does that work in Windows v3.xx?" In Windows
v3.xx, each DOS box is a VM (you have guessed that already, right?).
However, all Windows GUI applications run within the GUI subsystem of
Windows v3.xx. And where does that subsystem run? In a special VM called
the System VM. All Windows GUI applications, system-wide DLLs, and the GUI
subsystem's kernel itself run in one VM: the System VM. The System VM's
virtual display is managed by special DLLs in the GUI subsystem, such as
DISPLAY.DRV, USER.EXE, and GDI.EXE (these are all DLLs, even though they
have different extensions). Since Windows GUI applications don't have their
own VMs, they don't have their own virtual displays. They can run in
windows and not overwrite each other's output only because they
deliberately agree to work only through USER and GDI, and if a Windows
application tries to write directly to the video adapter, it will overwrite
the Windows screen. DOS boxes, on the other hand, have their own VMs, and
no matter how uncooperative they are, they can't affect each other. A
sharp-eyed reader might ask, "How can a DOS box run in a window then?"
That is actually an example of inter-VM communication. Since VMs are
isolated from each other, such communication can occur only with the
explicit sanction of the VMM/VxD subsystem that manages the VMs.
   One detail that I have omitted so far is that the VMM/VxD subsystem
isn't monolithic either. It
consists of the Virtual Machine Manager (VMM) and a collection of Virtual
"x" Devices (VxDs). This system is designed to be extendable: third-party
programmers can write their own VxDs by using special tools available as
part of the Windows Device Development Kit (DDK). One VxD that comes
standard with Windows is WSHELL. This VxD allows the GUI subsystem running
in the System VM to supervise the DOS boxes running in separate VMs. One of
WSHELL's functions is the piping of user I/O (display, keyboard, and mouse)
between VMs. When a DOS box is running in a window, WSHELL captures the
contents of its VM's virtual display and pipes it into the System VM. A
special Windows application running in the System VM talks to WSHELL
through a special API, picks up the other VM's output and displays it in a
GUI window.
   This is the story with Windows v3.xx. How about Win32s and Windows 95?
Well, Microsoft says that Windows v3.xx is 16-bit and Windows 95 is 32-bit.
Let's examine this claim. First of all, all of these systems have the plain
real-mode DOS underneath them, even Windows 95 (it's just hidden and
obfuscated). Real-mode DOS is obviously 16-bit. Then comes the VMM/VxD
subsystem. This baby is inherently 32-bit in all versions, even Windows
v3.00. Win32s adds one VxD to it, and Windows 95 adds a few more and
modifies the existing ones a little, but the differences are still minor.
The real differences begin in the GUI subsystem. In vanilla Windows v3.xx
it's all 16-bit. In Win32s and Windows 95 it is further subdivided. On one
side of the fence there is 16-bit code, on the other side there is 32-bit
code. The fence is actually pretty high: each side has to have its own
DLLs. An application on one side of the fence can't call DLLs from the
other side of the fence transparently for certain inherent reasons (namely,
the return instruction in the called function can't return to the other
side of the fence). Each common DLL (like WINSOCK, for instance) has to
have two versions, one on each side of the fence. Usually, only one of them
actually does the work, while the other calls it across the fence (although
such calling cannot be done transparently in applications, it can be done
non-transparently, i.e., with special coding on both sides). This is what
Win32s and Windows 95 do with USER, GDI, and some KERNEL functionality.
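   To illustrate the pattern (and only the pattern: every name below is
invented, and the real Win32s/Windows 95 thunking machinery differs in
detail), the 32-bit half of such a split DLL is, in spirit, a set of
stubs like this:

   /* HYPOTHETICAL sketch of the split-DLL pattern described above. */

   /* Invented stand-in for the non-transparent gateway to the 16-bit
    * side; a real gateway needs special coding on both sides of the
    * fence, which is exactly the point. */
   static long CallDown16(int opcode, void *args, int argbytes)
   {
       (void)opcode; (void)args; (void)argbytes;
       return -1;      /* would transfer to the 16-bit DLL and return */
   }

   #define OP_CONNECT 3      /* invented opcode shared by both sides */

   struct connect_args { int s; void *name; int namelen; };

   /* 32-bit stub exported to Win32 applications; the 16-bit
    * WINSOCK.DLL across the fence would do the actual work. */
   int stub_connect(int s, void *name, int namelen)
   {
       struct connect_args a;
       a.s = s; a.name = name; a.namelen = namelen;
       return (int) CallDown16(OP_CONNECT, &a, (int) sizeof a);
   }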
   The final note before getting back to Lynx regards Win32 console
applications. Before Win32, there was a one-to-one correspondence ("If you
want the rainbow, you have to put up with the rain") between running in the
System VM (and therefore having Windows DLLs available) and using the
Windows GUI. Win32 introduces a special class of applications that run in
the System VM and enjoy access to all Windows DLLs but look like DOS boxes.
In Win32s and Windows 95, this feat is implemented through inter-VM
communication. The Win32 console application runs in the System VM but its
user I/O goes through a separate DOS box-like VM. A dummy program called
CONAGENT.EXE (Console Agent) runs in that VM and communicates with a
special VxD, which communicates with the console application in the System
VM. This is very similar to a DOS box running in a window, except that it
works in the opposite direction (the output goes from the System VM to the
separate VM, and not the other way around). A sharp-eyed reader might ask,
"What if you run a Win32 console app in a window? Then both the code
execution and the user I/O are in the System VM, right?" It's not that
simple. In this case, the output goes from the Win32 console application
in the System VM to the console agent in a separate VM in the form of
ASCII codes; there it is written to the console VM's virtual video
adapter, which converts it into a graphical image; and the image then
travels back to the System VM, where it gets displayed through the GUI
APIs. Whew!
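   For what it's worth, the application side of all this machinery is
trivial. A minimal Win32 console program whose one line of output makes
the round trip just described:

   #include <windows.h>

   int main(void)
   {
       /* We run in the System VM; this text is piped to the console
        * agent's VM (and, in a window, all the way back again). */
       static const char msg[] = "Hello from the System VM\r\n";
       HANDLE out = GetStdHandle(STD_OUTPUT_HANDLE);
       DWORD written;
       WriteConsole(out, msg, sizeof msg - 1, &written, NULL);
       return 0;
   }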
   If you have been patient and read through to here, take a deep breath:
I'm getting back to Lynx! If you want Lynx to run under Windows v3.xx, you
can take three roads:
   1. Compile the Lynx source code with a Win16 compiler with the same
options you currently use for the Win32 version (that is, use WINSOCK for
TCP/IP access). This approach would make a classical Windows v3.xx network
application that would run under vanilla Windows v3.xx with any WINSOCK
implementation and wouldn't require Win32s. It would run in the System VM
and access the 16-bit WINSOCK.DLL in a straightforward way. However, there
is a problem. Lynx is a character-mode application and needs a text-mode
display. The Win32 and Protected-Mode DOS versions have their own VMs,
which provide virtual displays that are transparent all the way to the
hardware level. A Win16 version would run in the System VM and wouldn't
have a console VM like Win32 apps do, so it would need a curses
implementation that translates curses calls into Windows GUI (USER and
GDI) calls (see the sketch after this list). I don't know whether such an
implementation exists. Also, I don't know whether the mainstream Lynx code
suffers from VAXisms/UNIXisms (the assumption that int is 32 bits). If it
does, you will have problems compiling it with a 16-bit compiler.
   2. Run the Win32 version of Lynx under Win32s. This will solve the
bitness and the text output problem, since Win32s creates console VMs for
console Win32 apps in the System VM and provides the necessary inter-VM
communication. However, because of the bitness fence, the Win32 Lynx won't
see the 16-bit WINSOCK, so you would need a Win32 WINSOCK implementation
that thunks down to the 16-bit WINSOCK the same way the 32-bit USER and GDI
thunk down to their 16-bit versions in Win32s and Windows 95. Again, I
don't know whether one exists. Another downside is that you need Win32s.
   3. Run the Protected-Mode DOS version of Lynx in a DOS box. The problem
area in this case is the same as in the previous one: WINSOCK. A DOS box is
a separate VM, and accessing WINSOCK in the System VM requires inter-VM
communication and hence a special VxD. There is nothing that prevents you
from writing a VxD, but the skills and experience required for this task
normally restrict it to people who have grown up with Intel 80x86 assembly
language and the internals of DOS and related OSes on their lips (people
like me).
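   Here is the curses sketch promised under road 1: the handful of calls
that a hypothetical Win16 curses implementation would have to translate
into USER/GDI operations (this is the standard curses API, not any
particular Win16 implementation):

   #include <curses.h>

   int main(void)
   {
       initscr();                /* would have to create a GUI window */
       mvaddstr(0, 0, "Lynx needs a text-mode screen");
       refresh();                /* would repaint through GDI         */
       getch();                  /* would pump the message queue      */
       endwin();
       return 0;
   }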
   BTW, your suggestion of making a WINSOCK to packet driver adapter is
completely impractical. The WINSOCK and packet driver interfaces are at
completely different levels. WINSOCK is a Berkeley socket-like high-level
interface to the TCP/IP stack, which lets you open and accept TCP
connections. On the other hand, the packet driver interface is a raw low-
level interface to an Ethernet network card that lets you send and receive
raw Ethernet frames. The whole TCP/IP stack lives right between the two
interfaces.
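   The difference in level is plain from the code. Opening a TCP
connection through WINSOCK is a few high-level calls (standard WINSOCK
1.1 API; the address is a placeholder and error checking is omitted),
whereas a packet driver client would instead be assembling raw Ethernet
frames by hand:

   #include <string.h>
   #include <winsock.h>

   int main(void)
   {
       WSADATA wsa;
       SOCKET s;
       struct sockaddr_in sin;

       WSAStartup(MAKEWORD(1, 1), &wsa);
       s = socket(AF_INET, SOCK_STREAM, 0);      /* a TCP endpoint */
       memset(&sin, 0, sizeof sin);
       sin.sin_family      = AF_INET;
       sin.sin_port        = htons(80);
       sin.sin_addr.s_addr = inet_addr("10.0.0.1");   /* placeholder */
       connect(s, (struct sockaddr *) &sin, sizeof sin);
       /* connect() performs the whole TCP handshake; the stack below
        * WINSOCK, not the application, deals with frames and packets. */
       closesocket(s);
       WSACleanup();
       return 0;
   }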
   Getting back to practical approaches: of the three above, I would
choose the third, because I prefer the app to run in its own VM rather
than in the System VM. That gives it its own text-mode screen, and it
would also interact directly with the Protected Mode DOS rather than go
through the Win32 kernel. Being one of the few remaining assembly
language DOS-and-related-OSes programmers, I am able to write the necessary
VxD. However, before jumping ahead and writing the VxD, I need to resolve
one problem: the interface between the VxD and the DOS network application
(Lynx in this case). Defining this interface takes me back to an issue I
considered a while ago, which I will describe in the next paragraph.
   I have long thought that the approach taken by Lynx for DOS (the DJGPP
port, DosLynx, and Bobcat), namely, incorporating the Waterloo TCP/IP
stack into the app itself, is wrong. The approach taken in UNIX and
Windows (WINSOCK), namely, having a system-wide TCP/IP stack and a socket
interface to it, is much better. The latter approach is the one intended
by the design of TCP/IP itself and even by the OSI reference model. I
believe that such an approach should be possible under DOS. I plan to
write the DOS Socket Interface specification (DOSSOCK?), similar to
Berkeley sockets and to WINSOCK. When I do that, we will be able to make
a version of Lynx that uses this interface (that will be easy, since that
version won't differ much from the current DJGPP Waterloo and Win32
WINSOCK versions).
do that, I'll write the VxD that will let this version run unmodified in a
Windows DOS box and use the WINSOCK in the System VM. In effect, this VxD
will be a WINSOCK-to-DOSSOCK adapter.
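   For the curious, here is a first, entirely hypothetical guess at what
the DOSSOCK interface could look like (nothing here is specified yet; it
is modeled on Berkeley sockets and WINSOCK precisely so that the Lynx
changes stay small):

   /* dossock.h -- HYPOTHETICAL sketch only; the real specification
    * remains to be written. */
   struct ds_sockaddr { unsigned short family; unsigned char data[14]; };

   int ds_socket(int family, int type, int protocol);
   int ds_connect(int s, const struct ds_sockaddr *name, int namelen);
   int ds_send(int s, const char *buf, int len, int flags);
   int ds_recv(int s, char *buf, int len, int flags);
   int ds_close(int s);

   Under DOS these would presumably compile down to an INT-based entry
into the resident stack, much as WINSOCK calls compile down to calls into
WINSOCK.DLL.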
   To summarize: there is work to be done, but stay tuned! Thanks for
reading through!!!
   
   Sincerely,
   Michael Sokolov
   Phone: 216-646-1864
   ARPA Internet SMTP mail: address@hidden
