The evolution of what are widely considered modern graphics processing units (GPUs) began with the introduction of the first 3D add-in cards in 1995, followed by the widespread adoption of 32-bit operating systems and affordable personal computers (PCs). Before this point, the graphics industry depended on more prosaic 2D, non-PC architectures carrying large price tags. 3D and visualisation PC graphics eventually took over computer gaming, as well as military simulation and medical imaging. Today, graphics display reproduction is used in a vast array of applications, from industrial to educational.

Let’s look at how graphics architectures have changed over the years.

The 1960s saw architectures based on specific display list instructions, in which dedicated processor units drew lines and objects. This approach was, of course, very resource-heavy.

The 1970s brought the scan-convert architecture, which translated the incoming data stream into bitmapped video and laid the foundation for 3D graphics. A key example of a product using this type of architecture was the Atari 2600, released in 1977, which generated the screen display and sound effects as well as reading the input controllers.

In the 1980s, frame buffer architectures arrived. One example is the ATI Colour Emulation card, incorporated into Commodore computers. The card boasted 16 KB of memory for outputting monochrome green, amber or white text against a black background.

In the 1990s, there was an explosion of algorithmic graphics processing, and graphics accelerator cards started to appear. In 1992, Silicon Graphics released OpenGL 1.0, with Microsoft’s DirectX 1.0 introduced three years later.

By 2000, complex graphics processing was proliferating into the handheld consumer and mobile phone markets. Nokia introduced the first mobile phone/gaming hybrid, the N-Gage, in 2003; Sony’s PlayStation Portable was announced the same year. With the advent of OpenGL and DirectX, GPUs added programmable graphics processing. Each pixel could be processed individually, adding shading or texture to make an object look shiny, dull or rough.

Let’s consider a typical graphics controller solution. The host interface is normally a high-pin-count parallel bus that provides the large data throughput needed to sustain high graphics data rates. Since graphics data is shared, a memory controller provides memory management, whether the physical memory is on-chip or off-chip. The core of the GPU efficiently manipulates and processes graphical data. Traditionally, a large frame buffer memory holds complete frames of bit-mapped images for display via the display controller.
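To put that frame buffer requirement in numbers, here is a minimal C sketch that works out the memory needed to hold one complete frame for a 480 x 272 WQVGA panel at 16 bits per pixel (the panel size and colour depth are illustrative assumptions):

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    const uint32_t width  = 480;  /* WQVGA panel, horizontal pixels (assumed) */
    const uint32_t height = 272;  /* vertical lines (assumed) */
    const uint32_t bpp    = 2;    /* 16-bit RGB565: 2 bytes per pixel */

    uint32_t frame_bytes = width * height * bpp;  /* 261,120 bytes */
    printf("Frame buffer: %lu bytes (~%lu KB) per frame\n",
           (unsigned long)frame_bytes,
           (unsigned long)(frame_bytes / 1024));
    return 0;
}
```

That is roughly 255 KB for a single frame, before any double buffering is considered, which is why traditional controllers pair the GPU with a sizeable dedicated frame buffer memory.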

It is now, however, possible to implement a complete parallel processing mechanism for graphics, touch and audio on a single chip. FTDI Chip’s Embedded Video Engine (EVE) object-oriented technology is at the vanguard of this new human machine interface (HMI) design methodology.

Here, a serial bus serves as the host interface. This reduces the pin count and pushes packaging costs down. A serial interface also means that the data transfer throughput required between the microcontroller and the graphics chip is modest, so a simple, low-performance microcontroller can be utilised.
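As a concrete illustration, here is a minimal sketch of a memory write over SPI in the style of an FT800 EVE transaction, where a 3-byte header (a 2-bit opcode, ‘10’ for write, plus a 22-bit address) precedes little-endian data. The spi_select() and spi_transfer() functions are hypothetical placeholders for the host microcontroller’s SPI driver:

```c
#include <stdint.h>

extern void    spi_select(int on);       /* hypothetical: assert/release chip select */
extern uint8_t spi_transfer(uint8_t b);  /* hypothetical: full-duplex byte exchange */

/* Write one 32-bit word to an EVE memory address over SPI.
   Transaction format follows the FT800 datasheet: 2-bit opcode
   ('10' = write) plus 22-bit address, then little-endian data. */
static void eve_wr32(uint32_t addr, uint32_t data)
{
    spi_select(1);
    spi_transfer(0x80 | ((addr >> 16) & 0x3F));  /* '10' + address[21:16] */
    spi_transfer((addr >> 8) & 0xFF);            /* address[15:8] */
    spi_transfer(addr & 0xFF);                   /* address[7:0]  */
    spi_transfer(data & 0xFF);                   /* data, LSB first */
    spi_transfer((data >> 8) & 0xFF);
    spi_transfer((data >> 16) & 0xFF);
    spi_transfer((data >> 24) & 0xFF);
    spi_select(0);
}
```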

Display list memory, which stores primitives, has replaced the large graphics memory. Higher-level graphics widgets, such as progress bars, clocks, keys, gauges, text displays, sliders and gradients, can also be implemented as display lists. This gives designers an expansive array of objects to draw on for HMI implementation. To ensure a seamless transition between the microcontroller updating the display list and the display controller refreshing the panel, a ping-pong mechanism carries out on-screen updates through a register command, as sketched below.
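The fragment below sketches how a microcontroller might build a minimal display list in the chip’s display list RAM and then commit it with the swap register. The display list encodings follow the FT800 programmer’s guide, but both they and the register addresses should be verified against FTDI’s vendor header; eve_wr32() is the hypothetical SPI helper from the previous sketch:

```c
#define RAM_DL        0x100000UL  /* display list RAM (FT800 memory map) */
#define REG_DLSWAP    0x102450UL  /* swap register; verify address in vendor header */
#define DLSWAP_FRAME  2UL         /* swap at the next frame boundary */

/* Display list word encodings per the FT800 programmer's guide (verify) */
#define CLEAR_COLOR_RGB(r,g,b) ((0x02UL << 24) | ((uint32_t)(r) << 16) | ((uint32_t)(g) << 8) | (b))
#define CLEAR(c,s,t)           ((0x26UL << 24) | ((c) << 2) | ((s) << 1) | (t))
#define COLOR_RGB(r,g,b)       ((0x04UL << 24) | ((uint32_t)(r) << 16) | ((uint32_t)(g) << 8) | (b))
#define BEGIN(prim)            ((0x1FUL << 24) | (prim))
#define VERTEX2II(x,y,h,c)     ((2UL << 30) | ((uint32_t)(x) << 21) | ((uint32_t)(y) << 12) | ((h) << 7) | (c))
#define END()                  (0x21UL << 24)
#define DISPLAY()              (0x00000000UL)
#define POINTS                 2UL

static void draw_dot(void)
{
    uint32_t dl = RAM_DL;
    eve_wr32(dl, CLEAR_COLOR_RGB(0, 0, 0));   dl += 4;
    eve_wr32(dl, CLEAR(1, 1, 1));             dl += 4;  /* clear colour, stencil, tag */
    eve_wr32(dl, COLOR_RGB(255, 255, 255));   dl += 4;
    eve_wr32(dl, BEGIN(POINTS));              dl += 4;
    eve_wr32(dl, VERTEX2II(240, 136, 0, 0));  dl += 4;  /* centre of a 480x272 panel */
    eve_wr32(dl, END());                      dl += 4;
    eve_wr32(dl, DISPLAY());                  dl += 4;  /* terminate the display list */
    eve_wr32(REG_DLSWAP, DLSWAP_FRAME);  /* ping-pong: new list goes live next frame */
}
```

Writing DLSWAP_FRAME asks the hardware to switch to the newly written list at the next frame boundary, so the panel never shows a half-updated screen.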

The graphics controller element executes primitive objects, such as lines, points, rectangles, bitmaps, text and graphs, as well as operations such as stencilling, alpha blending, masking and anti-aliasing, to deliver rich graphics generation without microcontroller involvement. These intensive processing tasks no longer depend on the microcontroller, so the system design can focus on creating rich graphics. At the same time, the memory requirement on the platform side is reduced significantly.
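To make those operations concrete, the fragment below (spliced into a display list before the final DISPLAY() entry) draws a 5-pixel-wide, hardware anti-aliased line at roughly 50% alpha. The encodings again follow the FT800 programmer’s guide and should be verified against the vendor header:

```c
/* Additional DL encodings per the FT800 programmer's guide (verify) */
#define LINE_WIDTH(w)  ((0x0EUL << 24) | (w))  /* width in 1/16-pixel units */
#define COLOR_A(a)     ((0x10UL << 24) | (a))  /* global alpha for blending */
#define VERTEX2F(x,y)  ((1UL << 30) | (((uint32_t)(x) & 0x7FFF) << 15) | ((uint32_t)(y) & 0x7FFF))
#define LINES          3UL

/* Fragment: append to a display list before DISPLAY() */
eve_wr32(dl, COLOR_A(128));                  dl += 4;  /* blend at ~50% opacity */
eve_wr32(dl, LINE_WIDTH(5 * 16));            dl += 4;  /* 5 px wide, edges anti-aliased */
eve_wr32(dl, BEGIN(LINES));                  dl += 4;
eve_wr32(dl, VERTEX2F(20 * 16, 20 * 16));    dl += 4;  /* VERTEX2F takes 1/16-px coords */
eve_wr32(dl, VERTEX2F(400 * 16, 200 * 16));  dl += 4;
eve_wr32(dl, END());                         dl += 4;
```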

The display list is executed once by the graphics controller element for every horizontal line. It processes the primitive objects in the display list and constructs a display line buffer; the horizontal pixel content in the line buffer is updated wherever an object is visible on that line. This architecture therefore avoids the need for a large frame buffer memory, with a much smaller memory footprint being sufficient. An integrated touch controller element enables a touch feedback experience to be derived. Resistive and capacitive touch are the common solutions currently; these return sensor parameters, in the form of XY co-ordinates, to the microcontroller when a touch event occurs.
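The following is a conceptual sketch, not vendor code, of that line-based rendering model: the display list is replayed once per horizontal line, and only the objects intersecting the current line touch a small line buffer. The object_t type, rasterize_span() and push_line_to_panel() are hypothetical helpers used purely for illustration:

```c
#include <stdint.h>
#include <string.h>

#define H_PIXELS 480
#define V_LINES  272

typedef struct { int y_min, y_max; } object_t;  /* vertical extent of a primitive */

extern int      object_count;
extern object_t display_list[];
extern void rasterize_span(const object_t *o, int y, uint16_t *line);  /* hypothetical */
extern void push_line_to_panel(const uint16_t *line, int y);           /* hypothetical */

void render_frame(void)
{
    uint16_t line_buffer[H_PIXELS];  /* one line of pixels, not a whole frame */

    for (int y = 0; y < V_LINES; y++) {
        memset(line_buffer, 0, sizeof line_buffer);
        for (int i = 0; i < object_count; i++) {  /* replay the display list */
            const object_t *o = &display_list[i];
            if (y >= o->y_min && y <= o->y_max)   /* visible on this line? */
                rasterize_span(o, y, line_buffer);
        }
        push_line_to_panel(line_buffer, y);  /* stream out, then reuse the buffer */
    }
}
```

The working memory here is one line of pixels (960 bytes for a 480-pixel line at 16 bpp) rather than a quarter-megabyte frame, which is the source of the footprint saving described above.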

Instead of the traditional approach of scanning for touch and responding with XY co-ordinates, EVE can employ an intelligent touch tagging system. An invisible tag can be assigned to a button or key; when a touch event occurs within the tagged area, the associated tag number, rather than raw XY co-ordinates, is fed back to the microcontroller, saving its processing resources for other tasks.
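A minimal sketch of tagged touch, reusing the display list macros above: TAG(1) attaches an invisible tag to the key drawn after it, and a single-byte read of REG_TOUCH_TAG tells the microcontroller which tagged object, if any, is being touched. The REG_TOUCH_TAG address below is a placeholder and must be taken from FTDI’s vendor header; eve_rd8() is a hypothetical SPI read helper:

```c
#define REG_TOUCH_TAG  0x102478UL  /* PLACEHOLDER address: take the real value from ft800.h */
#define TAG(t)         ((0x03UL << 24) | (t))  /* per the FT800 programmer's guide */
#define RECTS          9UL

extern uint8_t eve_rd8(uint32_t addr);  /* hypothetical SPI read helper */

static void draw_key_and_poll(void)
{
    uint32_t dl = RAM_DL;
    eve_wr32(dl, CLEAR(1, 1, 1));             dl += 4;
    eve_wr32(dl, TAG(1));                     dl += 4;  /* subsequent drawing carries tag 1 */
    eve_wr32(dl, BEGIN(RECTS));               dl += 4;
    eve_wr32(dl, VERTEX2II(40, 40, 0, 0));    dl += 4;  /* key rectangle: opposite corners */
    eve_wr32(dl, VERTEX2II(140, 90, 0, 0));   dl += 4;
    eve_wr32(dl, END());                      dl += 4;
    eve_wr32(dl, DISPLAY());                  dl += 4;
    eve_wr32(REG_DLSWAP, DLSWAP_FRAME);

    if (eve_rd8(REG_TOUCH_TAG) == 1) {
        /* the key is being touched: one byte read, no XY maths on the MCU */
    }
}
```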

FTDI Chip’s EVE technology takes a far more sophisticated approach to HMI implementation, allowing advanced HMIs with greater functionality to be developed while keeping within budgetary and time-to-market constraints.