Virtual & Augmented Reality

Astek: facilitating technological progress

The aeronautics, space and defence, film and gaming industries are the main promoters of virtual and augmented reality. Whatever the precise use – immersing someone in an environment far removed from the real world and accessible only through virtual reality, placing them in a realistic environment facing a fantastical situation, or keeping them in their own environment so they can model it – all business sectors will soon be investing in the field.

While virtual reality gives someone a sensory experience that simulates being physically present in a computer-generated artificial environment, augmented reality adds computer-generated virtual elements, with which the person can interact, to their real environment.

Astek has always played an active part in developing these technologies, contributing to advances in computer graphics, image processing and artificial intelligence.

3D computer graphics

Since the beginning of the 1990s, increased computing power, falling costs and the mass production of 3D graphics cards capable of “real-time” rendering have made it easier to create so-called “synthetic” images from 3D models. Our experts are involved in the following areas:

  • Computer Aided Design (creating pieces of equipment, vehicles, buildings, structures, etc.)
  • Learning and crisis management simulators (preparing for medical, military and aeronautical interventions)
  • Entertainment (film industry and video games)
  • 3D printing (design, visualisation or verification of the object to be printed before production)

From a technical point of view, and particularly in real time, 3D computer graphics rely on “optical” concepts: the main parameters are the number of objects in the environment, the accuracy of the models, perceived depth of field, transparency and reflections, lighting and shadows, the complexity of movement, etc.
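
As a small illustration of these “optical” concepts, the sketch below (hypothetical, written for this page rather than taken from any Astek project) projects a 3D point onto a 2D image plane using a pinhole-camera model – the geometric step a real-time renderer repeats for every vertex of every object, with depth of field, lighting and shadows layered on top of the same geometry.

    #include <cstdio>

    // Minimal pinhole-camera projection: maps a 3D point in camera space
    // onto a 2D image plane at focal distance f. Real-time renderers apply
    // this geometry to every vertex of every object in the scene.
    struct Point3 { double x, y, z; };
    struct Point2 { double x, y; };

    Point2 project(const Point3& p, double f) {
        // Points farther away (larger z) land closer to the image centre,
        // which is what produces the perception of depth.
        return { f * p.x / p.z, f * p.y / p.z };
    }

    int main() {
        Point3 corner{1.0, 0.5, 4.0};           // a vertex 4 units in front of the camera
        Point2 onScreen = project(corner, 2.0); // focal length of 2 (arbitrary units)
        std::printf("projected to (%.2f, %.2f)\n", onScreen.x, onScreen.y);
        return 0;
    }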

Digital image processing

Image processing is used in many areas (geographical and weather maps, surveillance, industrial control, counterfeit detection, microscopy, aeronautics and space, etc.) and covers the entire electromagnetic spectrum, notably in medicine (ultrasound, scanners, MRI, etc.) and astronomy. Our expertise lies in the following areas:

  • Recognising known elements (text characters, objects, images)
  • Video analysis (movement, tracking, object edges, etc. – see the edge-detection sketch after this list)
  • Person recognition (biometrics, behavioural analysis, etc.)
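
As a concrete, if simplified, illustration of the edge detection mentioned above, the sketch below (written for this page, not taken from a project) applies the classic Sobel operator to a tiny synthetic grayscale image in order to locate object boundaries.

    #include <cmath>
    #include <cstdio>

    // Illustrative Sobel edge detection on a tiny 5x5 grayscale image.
    // Locating object edges like this is one of the basic building blocks
    // behind video analysis and recognition.
    constexpr int W = 5, H = 5;

    int main() {
        // Synthetic image: a bright square on a dark background.
        int img[H][W] = {
            {0,   0,   0,   0, 0},
            {0, 255, 255, 255, 0},
            {0, 255, 255, 255, 0},
            {0, 255, 255, 255, 0},
            {0,   0,   0,   0, 0},
        };
        int gx[3][3] = {{-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1}};  // horizontal gradient kernel
        int gy[3][3] = {{-1, -2, -1}, {0, 0, 0}, {1, 2, 1}};  // vertical gradient kernel

        for (int y = 1; y < H - 1; ++y) {
            for (int x = 1; x < W - 1; ++x) {
                int sx = 0, sy = 0;
                for (int j = -1; j <= 1; ++j)
                    for (int i = -1; i <= 1; ++i) {
                        sx += gx[j + 1][i + 1] * img[y + j][x + i];
                        sy += gy[j + 1][i + 1] * img[y + j][x + i];
                    }
                // A large gradient magnitude marks an edge pixel.
                std::printf("%4d ", (int)std::sqrt((double)(sx * sx + sy * sy)));
            }
            std::printf("\n");
        }
        return 0;
    }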

Artificial Intelligence and Data Science

In addition to our imaging skills, we can offer experience in artificial intelligence (continuous learning via software) and big data (comparison with existing databases) to use augmented reality in other ways:

  • Optimising human figures added to virtual worlds by quickly moving from an initial coded virtual model to a more realistic reproduction (e.g. for studying human behaviours such as panicked movement)
  • Ensuring better understanding of a situation by object recognition software (e.g. a driverless car differentiating between a person and a road sign)
  • Deciding which information, objects or events to add to an environment or to monitor, based on knowledge of the environment and of user actions (e.g. a maintenance support application that directs the technician to the breakdown location, gives them the necessary instructions and explanations, checks the repair has worked, etc.) – a simplified version of this decision step is sketched after this list
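
In its simplest form, that decision step comes down to a lookup from recognised labels to overlay content. The sketch below is purely illustrative: the labels, instruction texts and function names are invented for the example.

    #include <cstdio>
    #include <map>
    #include <string>
    #include <vector>

    // Hypothetical decision step: given the labels produced by an object
    // recogniser, choose which instructions to overlay in the user's view.
    // Labels and instruction texts are invented for illustration.
    std::vector<std::string> chooseOverlays(const std::vector<std::string>& recognised) {
        static const std::map<std::string, std::string> knowledgeBase = {
            {"worn_bearing", "Replace bearing; torque bolts to 25 Nm"},
            {"oil_leak",     "Close valve V3 before inspecting the seal"},
            {"warning_zone", "Protective goggles required beyond this point"},
        };
        std::vector<std::string> overlays;
        for (const auto& label : recognised) {
            auto it = knowledgeBase.find(label);
            if (it != knowledgeBase.end())
                overlays.push_back(it->second);  // only show what is relevant to the scene
        }
        return overlays;
    }

    int main() {
        for (const auto& text : chooseOverlays({"oil_leak", "unknown_object"}))
            std::printf("overlay: %s\n", text.c_str());
        return 0;
    }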

Typical use of virtual reality: the simulator

In a professional context, virtual reality is mainly used for training or testing by giving people practical experience. People can be trained to use a tool, machine or vehicle to perform a precise operation – potentially in a hostile or unknown environment, or in the face of an unusual or particularly serious event or accident. Beyond training, virtual reality simulation can help people develop quick reactions to certain situations or become aware of how serious they are.

There are two stages in the lifecycle of a virtual reality training tool, each with its own merits:

  • The “mock-up”: allows training to be carried out even if the tool, machine or vehicle it copies has not yet been produced. This ensures competent users are available from the moment the item is eventually developed and, if necessary, the design can be modified when user training reveals any sub-optimal aspects
  • The “simulator” itself: only when the actual item has been produced can its actual benefits, constraints and defects be assessed and reproduced in virtual reality. The simulator is then able to train users in breakdowns and incidents they may encounter when using the finished product

Typical use of augmented reality: maintenance support

From a technical point of view, augmented reality equipment (glasses, mobile phones, etc.) performs several functions, which together form the processing loop sketched after this list:

  • Taking in (using cameras), digitising and refining the “real” scene as seen by the person
  • Interpreting all this to catalogue the items or people present
  • Recognising objects, people and situations
  • Deciding on the virtual objects and information to include in the scene depending on the objective linked to the selected augmented reality application
  • 3D computer graphics to insert or superimpose the virtual elements
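
Every type and function in the sketch below is a placeholder invented for illustration rather than an actual device API; it simply shows the overall shape of the per-frame loop these functions form.

    #include <cstdio>
    #include <string>
    #include <vector>

    // Illustrative per-frame augmented-reality loop covering the stages
    // listed above. All names are placeholders invented for the sketch.
    struct Frame { int id; };                            // stands in for a camera image
    struct Detection { std::string label; };
    struct Overlay { std::string text; };

    Frame captureFrame(int n) { return Frame{n}; }       // 1. acquire and digitise the real scene

    std::vector<Detection> recognise(const Frame&) {     // 2-3. interpret and catalogue items
        return {{"pump_housing"}, {"pressure_gauge"}};   //      (placeholder results)
    }

    std::vector<Overlay> decide(const std::vector<Detection>& d) {  // 4. choose virtual content
        std::vector<Overlay> out;
        for (const auto& det : d)
            out.push_back({"inspect: " + det.label});
        return out;
    }

    void render(const Frame& f, const std::vector<Overlay>& overlays) {  // 5. superimpose in 3D
        for (const auto& o : overlays)
            std::printf("frame %d -> %s\n", f.id, o.text.c_str());
    }

    int main() {
        for (int n = 0; n < 3; ++n) {                    // three iterations of the loop
            Frame frame = captureFrame(n);
            render(frame, decide(recognise(frame)));
        }
        return 0;
    }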

Maintenance support is perhaps one of the fastest growing areas for augmented reality. Depending on the complexity of the software involved, there are several different uses:

  • At the user’s request, when they are looking at a particular part or situation (breakdown, wear, vibration, entering an area requiring special precautions, etc.), the software analyses it, recognises it and displays the relevant maintenance instructions (possibly including the part’s own maintenance schedule)
  • The software uses its own initiative to analyse the user’s location and entire field of view, recognise specific items or situations and make appropriate recommendations
  • The software monitors the repair and issues new recommendations as needed
  • The software connects to a remote technician who sees the situation just as the user is seeing it (which allows the former to take location-specific constraints into account) and can display the information needed for repair directly in the user’s environment (e.g. by highlighting a screw to unscrew, directing the technician to the right tool to use, etc.) and check the repair is progressing properly in real-time

Projects

3D vision for an armoured vehicle simulator

Astek developed various components of 3D tactical terrain monitoring software integrated into an armoured vehicle training simulator.

These simulators recreate a virtual battlefield on which training or study simulations are conducted via cameras controlled using a joystick or attached to someone involved in the tactical scenario.

  • Software design and development
  • Testing and integration into the final platform
  • C++/C# and distributed architecture

Infrared image for airborne optronic system

Astek created infrared and near-infrared image processing software for a French company working in the offensive defence sector.

Integrated into an airborne optronic system, the software enabled air-to-ground attacks with laser-guided weapons.

  • Software design and development
  • Testing and integration into the final platform
  • C++, Linux ELinOS, UML, Rhapsody, MyCCM, CORBA, Rack-CORS, Mercurial and ClearCase

Augmented reality and navigation in poor visibility conditions

Astek created software for an on-board EVS-H real-time image processing computer used to improve helicopter pilot situational awareness in poor visibility conditions. The computer then had to be connected to an augmented reality helmet.

The aim was to allow landing on a partially obscured area (e.g. hidden by smoke from a burning oil platform, fog or a snowstorm).

Cameras film the ground and the software “removes” as much as possible of the smoke, mist and snow and feeds a clearer image of the ground to the helmet.
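
The exact clean-up techniques used in the project are not described here; as a loose, hypothetical illustration of the general idea, the sketch below applies a simple linear contrast stretch, one of the most basic ways to make a washed-out grayscale image more readable.

    #include <algorithm>
    #include <cstdio>
    #include <vector>

    // Hypothetical illustration only: a linear contrast stretch that maps the
    // darkest and brightest pixels of a hazy grayscale row back onto the full
    // 0-255 range. Real smoke and fog removal is far more sophisticated.
    void contrastStretch(std::vector<int>& pixels) {
        if (pixels.empty()) return;
        int lo = *std::min_element(pixels.begin(), pixels.end());
        int hi = *std::max_element(pixels.begin(), pixels.end());
        if (hi == lo) return;                           // flat image, nothing to stretch
        for (int& p : pixels)
            p = (p - lo) * 255 / (hi - lo);
    }

    int main() {
        // Haze and smoke compress intensities into a narrow, washed-out band.
        std::vector<int> hazyRow = {120, 130, 125, 140, 135};
        contrastStretch(hazyRow);
        for (int p : hazyRow) std::printf("%d ", p);    // values spread back over 0..255
        std::printf("\n");
        return 0;
    }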

  • Architecture
  • Development
  • Technologies used: C++, Linux ELinOS, UML, Rhapsody 7.4 and ClearCase UCM