We work at the Institute for Computer Science and Control, Hungarian Academy of Sciences, where we are developing an open-source, scalable, distributed VR/AR engine called ApertusVR. Our research group was founded about one and a half years ago, in August 2016, but its predecessor group conducted a ten-year research programme on collaborative virtual reality technologies. Our group reviewed and published those results in the form of a software system called ApertusVR.
ApertusVR is a loosely coupled, modular software system that lets you create collaborative virtual spaces. You can build virtual reality scenes and share them with your colleagues. A good example is a virtual meeting room where you can communicate via text and voice chat regardless of your geolocation. At first this may sound like nothing new, but until now, to achieve it you had to choose a VR platform or framework and build your application within that framework's limitations. The choice also dictated the hardware you had to use. With ApertusVR these limitations simply disappear.
ApertusVR creates a new abstraction layer over the hardware vendors so that virtual and augmented reality technologies can be rapidly integrated into your developments and products.
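The abstraction-layer idea can be sketched in a few lines. This is an illustrative mock-up, not the actual ApertusVR API: all class and function names here (`Scene`, `SceneNode`, `RenderBackend`, `DesktopBackend`) are hypothetical, chosen only to show how one hardware-independent scene description can be rendered by interchangeable vendor backends.

```python
# Hypothetical sketch of the abstraction-layer idea (NOT the real
# ApertusVR API): the application describes the scene once, and
# pluggable backends map it onto whatever VR/AR hardware is present.

class SceneNode:
    def __init__(self, name, position=(0.0, 0.0, 0.0)):
        self.name = name
        self.position = position
        self.children = []

class Scene:
    """Hardware-independent scene description shared by all participants."""
    def __init__(self):
        self.root = SceneNode("root")

    def add(self, node):
        self.root.children.append(node)

class RenderBackend:
    """Each display/headset vendor would be wrapped by one backend plugin."""
    def render(self, scene):
        raise NotImplementedError

class DesktopBackend(RenderBackend):
    """A stand-in backend that 'renders' to plain text."""
    def render(self, scene):
        return [f"draw {n.name} at {n.position}" for n in scene.root.children]

scene = Scene()
scene.add(SceneNode("meeting_table", (0.0, 0.0, -2.0)))
backend = DesktopBackend()  # swap for an HMD backend without touching the scene
print(backend.render(scene))  # → ['draw meeting_table at (0.0, 0.0, -2.0)']
```

The point of the pattern is that the application code above the abstraction layer never changes when the hardware does; only the backend plugin is swapped.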
Read more: Architecture of ApertusVR
Our business logic
Our Institute for Computer Science and Control is among the prominent computer science research institutes in Europe. The results of the last decade made it clear that, although our institute has numerous financially successful research projects, we need to be open to the market and start pre-competitive research.
One such research effort is ApertusVR. As it is an open-source toolkit, we are looking for B2B partners. We deliver R&D services to our partners so they can benefit from the advances in VR/AR technologies. Although we are a research institute, we leave open the option of founding start-up enterprises in order to attract investors who see market potential in ApertusVR and would like to jointly develop a future B2C product. We are also looking for partners who could supply hardware to our future clients: we intend to recommend the best hardware for the VR/AR solution we deliver in each development. We cannot act as official resellers, but if your product meets our client's requirements, we will gladly recommend it to them.
Projects we are working on
The main goal of ApertusMED is to allow real-time visualization of radiological examinations and to match the virtual examination data onto the patient's physical and virtual body. The matching process is performed in an augmented reality scene, allowing viewers to examine the data in physical space, using their own movements to walk around the virtual organs built from the examination data.
ApertusMED could help in medical education, more specifically in surgical planning, where participants can hold a consultation in a shared virtual and/or augmented reality scene, independently of geolocation. The other goal of our tender is to complete a mobile presentation suitable for demonstrating the capabilities and research results of ApertusMED, and to increase the number of case studies related to the ApertusVR programmers' library.
Within the Symbio-tic project, we collaborate with the Research Laboratory on Engineering and Management Intelligence. Our task in this joint project is to virtualize the physically built production environment in order to visualize real-time data, so that a person can take control of the production environment independently of his or her geolocation. This virtualized production environment serves as a sandbox for experimenting with and fine-tuning production procedures, and as a space for consulting with colleagues. All participants in the factory (humans, software and hardware) communicate and share knowledge at the highest cognitive level: the level of visualization.
This application enables doctors to participate in a medical examination independently of location with the help of shared virtual reality. Doctors taking part in the examination can draw on and annotate the examination material and attach other documents. By streaming the examination from a device, we can create a virtual canvas that matches the field of view of the examination device's camera, so the doctors have the sense of being inside the patient.
Visual stimulus for grid cell research in human navigation
We developed an application using ApertusVR for the University of Texas at Austin. In this research, the patient has to chase bubbles within a certain time limit. The application thereby produces a constant visual stimulus, while the patient has to plan a route to catch as many bubbles as possible. The task is monitored using clinical EEG devices and the data are processed for further studies. The application runs on a tablet and on the Oculus Rift from the same codebase, using the necessary plugins.
The scenario in this research programme was to create a multiplayer environment in which the participants had to solve a problem together. The participants were in different geolocations and were monitored with EEG devices. The EEG data were processed with big-data solutions for further studies. Our task was to deliver the IT solution for this research project.
Our CAVE-like 3D Immersive Virtual Reality Display has three special walls that allow images to be projected on them, surrounding the people using it and creating the illusion of being inside the virtual environment. The walls of the VR Display are made up of rear-projection screens, and high-end projectors display images on each of the screens. The user inside the VR Display wears special glasses and sees 3D images of the virtual scene's objects projected on the walls. A head-tracking sensor follows the user's physical movements, so the projected view is continuously updated to show what each object would look like from the user's current position as he or she walks around it. People using the VR Display can actually see objects floating in the air, walk around them, and look inside them.
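The head-tracking principle can be sketched as a per-frame view-matrix update. This is a generic look-at computation, not ApertusVR or CAVE code (real CAVE walls use off-axis projection matrices per screen, but the core idea, rebuilding the camera transform from the tracked head position every frame, is the same); the coordinates used are made up for illustration.

```python
import math

# Generic head-tracked view sketch (illustrative only): each frame, the
# tracked head position is turned into a fresh view matrix, keeping the
# projected image consistent with the user's physical viewpoint.

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def norm(v):
    length = math.sqrt(dot(v, v))
    return tuple(x / length for x in v)

def look_at(eye, target, up=(0.0, 1.0, 0.0)):
    """Right-handed view matrix for a camera at `eye` looking at `target`."""
    f = norm(sub(target, eye))   # forward axis
    r = norm(cross(f, up))       # right axis
    u = cross(r, f)              # true up axis (unit, since r ⟂ f)
    return [
        [r[0], r[1], r[2], -dot(r, eye)],
        [u[0], u[1], u[2], -dot(u, eye)],
        [-f[0], -f[1], -f[2], dot(f, eye)],
        [0.0, 0.0, 0.0, 1.0],
    ]

# The tracker reports a new head position; rebuild the view for this frame.
view = look_at(eye=(0.5, 1.7, 2.0), target=(0.0, 1.0, 0.0))
```

Because the matrix is recomputed every frame from the sensor data, walking around a virtual object yields exactly the parallax a physical object would produce.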
Imagine a completely new form of relationship with your IT devices: a relationship in which the mouse and the keyboard seem like ancient relics used only by geeks and IT professionals; a relationship in which your 'desktop' is much more closely linked with your physical reality, and in which social interactions play a central and practical role.
This is our vision of the not-too-distant future: a technological revolution is coming that is poised to change our lives much as the 2D graphical interface displaced the character-based command line just a few decades ago. VirCA aims to implement this vision by adopting a shareable and fully customizable 3D virtual workspace as its central idea. VirCA enables people who are not necessarily in the same location, or even on the same continent, to create ideas, then design and implement them together in a shared virtual space. VirCA is a pilot solution that highlights several key tenets of the EU's Future Internet trend, and as such provides very effective means of collaboration in virtual spaces.
The novel philosophy behind the platform has been recognized with high-ranking awards, such as the exhibitor award at the FET'11 international forum and the 2013 TÜV Rheinland Innovation Award.
Thanks to the VirCA platform's ability to support the cooperative activity of large-scale international consortia, we have received EU funds to coordinate research projects in various topics such as cognitive neuroscience and dynamic robot behaviour.
Visionair is an acronym for "VISION Advanced Infrastructure for Research".
Visionair calls for the creation of a European infrastructure for high level visualisation facilities that are open to research communities across Europe and around the world. By integrating existing facilities, Visionair aims to create a world-class research infrastructure for conducting state-of-the-art research in visualisation, thus significantly enhancing the attractiveness and visibility of the European Research Area (ERA).
With over 20 members across Europe participating, VISIONAIR offers facilities for Virtual Reality, Scientific Visualisation, Ultra High Definition, Augmented Reality and Virtual Services.
The HUNOROB (Hungarian-Norwegian research-based innovation for the development of new, environmentally friendly, competitive robot technology for selected target groups) project combines academic basic research, R&D projects, and product-level innovation. Due to its nature, the basic research related to the project has general goals. Beyond general scientific progress, the most important expected achievement of the project is that it aggregates a wide range of academic research and related R&D activities into a few work packages with goals beneficial to society. The work packages are mainly related to R&D activities and integrate the knowledge accumulated by scientific research; their output can be evaluated from the standpoint of innovation.
Telemanipulation is a process in which a user's activity is extended to a dangerous or inaccessible environment without his or her personal presence. The extension is performed by a master-slave system: the master device represents the distant environment to the user, and the slave device represents the user to the distant environment. In the classical concept, early telemanipulation systems were simply elongations of human arms; muscles were later replaced by external energy sources and artificial actuators, but control remained the task of the human operator, in a point-to-point manner.
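The point-to-point master-slave concept can be sketched as a simple control loop. This is a toy one-dimensional illustration under assumed numbers (the function name `slave_step`, the sample values, and the actuator step limit are all made up), not a real telemanipulation controller: the operator's master device emits position setpoints, and the slave actuator tracks each commanded point as fast as its speed limit allows.

```python
# Minimal sketch of point-to-point master-slave telemanipulation
# (illustrative only, 1-D): the operator moves the master device, and the
# slave in the remote environment tracks each commanded setpoint.

def slave_step(slave_pos, setpoint, max_step=0.5):
    """Move the slave toward the commanded point, limited by actuator speed."""
    delta = setpoint - slave_pos
    if abs(delta) <= max_step:
        return setpoint                 # close enough: reach the setpoint
    return slave_pos + max_step * (1 if delta > 0 else -1)

master_samples = [0.0, 1.0, 2.0, 2.0, 0.5]   # positions read from the master arm
slave_pos = 0.0
trace = []
for setpoint in master_samples:              # one iteration per control cycle
    slave_pos = slave_step(slave_pos, setpoint)
    trace.append(slave_pos)

print(trace)  # → [0.0, 0.5, 1.0, 1.5, 1.0]
```

The trace shows the characteristic behaviour of point-to-point control: the slave lags behind fast master motions because the human operator commands positions while the actuator enforces its own rate limit.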