
Technological context of mobile screens

Smartphones and tablets appeared in the latter half of the 2000s and quickly achieved widespread adoption. These mobile screens, hybrids of the mobile phone and the computer, have become indispensable and are now everywhere in our daily lives. As a form of computer adapted to situations of mobility, these digital machines lost their novelty as they gradually blended into the contemporary technological landscape: they are now an integral part of the digital technical ensemble that characterizes our era.

That ubiquity does not, however, correspond to a standardization that would make these devices interchangeable. Rather, it results from a need for interoperability among the various device formats with which we surround ourselves. Indeed, mobile screens present unique technical features that deserve to be explored beyond their preconceived uses, and the field of design and the artistic practices of interactivity are particularly conducive to such exploration. Because mobile screens can obtain information about their own physical state (orientation, angle, etc.), they do away with the need for external control interfaces such as the mouse or keyboard. Through the many sensors they contain, mobile screens allow for interaction with the media they display (image, text, video, etc.) through their very manipulation: they are their own control device. For a time, access to the exploratory and artistic use of these technical specificities proved difficult, as creating interactive works on mobile screens required mastery of a software development toolchain designed by IT specialists. But that situation is now changing.

Because they are technical objects facilitating the creation of a “mobile internet”, mobile screens include a web browser, a software element we find on a growing number of digital machines (computers, game consoles, smart TVs, etc.). The ubiquity of social networking services, the increasingly common use of the Internet, and the use of interactive techniques for creating web documents all contribute to the evolution of web technologies, the browser progressively becoming a platform in and of itself, like the major operating systems, and even showing a tendency to replace them (Chrome OS, Firefox OS). More than ever, creating for the web platform now makes it possible to distribute one’s creative work on a wide variety of devices, as long as they are equipped with a browser of some kind. A theoretically universal virtual machine, a standardized and easily accessible medium for consultation and software creation, the browser is one of the vectors of interoperability among different digital machines: with a bit of effort put into developing an automatic system of adaptation to the hardware format, it is now possible to produce works that take advantage of the capabilities of mobile screens while remaining executable on other kinds of devices.
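Concretely, such automatic adaptation usually rests on feature detection in the browser. The sketch below uses only standard web APIs; the adaptation strategy built around them is merely an illustrative assumption, not a Mobilizing.js mechanism:

// Sketch: detect the capabilities of the host device at startup and
// adapt the interaction mode of a work accordingly.
function detectCapabilities() {
    var canvas = document.createElement("canvas");
    return {
        touch: "ontouchstart" in window,                  // touch screen?
        motion: "DeviceOrientationEvent" in window,       // orientation sensors?
        webgl: !!(canvas.getContext("webgl") ||
                  canvas.getContext("experimental-webgl")),
        small: Math.min(screen.width, screen.height) < 768 // phone-sized display?
    };
}

var caps = detectCapabilities();
if (caps.motion) {
    // drive the piece with the physical orientation of the device
    window.addEventListener("deviceorientation", function (e) {
        // use e.alpha, e.beta and e.gamma here
    });
} else if (caps.touch) {
    // fall back to touch interaction
} else {
    // fall back to mouse and keyboard
}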

An authoring tool for what kind of mobility?

So, if using dedicated design tools to explore the mobility of mobile screens was an important goal—one that motivated the forging of the Mobilizing programming language for artists and designers eager to use mobile screens and their specificities—it now seems essential to encourage a new form of mobility: the mobility of interactive works across the wide variety of digital media. It is a question of endowing interactive creations with the ability to run, and to be experienced by the public, on different devices, while adapting to the technical features of each of them.

That is the mobility we now want to encourage through the design and development of Mobilizing.js. A software library and programming interface based on JavaScript, an interpreted language used in particular to program the behavior of elements contained in web pages, Mobilizing.js is a programming-based authoring environment for graphic and visual design. The choice of JavaScript is based in part on the recent evolution of the status of the web browser, as well as on the proliferation of JavaScript execution contexts that now allow access to the native layers of host platforms. On mobile screens, for example, when a hardware element cannot be reached and controlled from the JavaScript context, it is possible to build bridges between that interpretation context (JavaScriptCore, V8, Rhino, etc.) and the lower software layers (native SDKs) with direct access to the operating system and the hardware. That possibility of broadening the capacities of JavaScript interpreters is a key element of the new form of mobility we are trying to bring about with Mobilizing.js. Because JavaScript is interpreted, it is also conducive to forging high-level alternative programming languages, whether visual, textual, experimental, or even esoteric. Mobilizing.js is therefore an evolving software authoring environment that seeks to meet the software and hardware needs of a new, multi-faceted, and composite mobility that, in our opinion, characterizes the current and future digital technological landscape, following upon the introduction of mobile screens.

Mobilizing.js?

Mobilizing.js is a software authoring environment for artists and designers intended to encourage the creation of interactive works on various forms of device-screens. The great versatility of the increasingly well-established JavaScript language makes it possible for Mobilizing.js to expand its field of action to many software environments, including browsers, Node.js servers, and even the contexts specific to certain machines (mobile devices such as tablets and smartphones running iOS, Android, Windows, etc.). The foundations of Mobilizing.js take shape as a JavaScript library that specifies a programming interface developed for interactive artistic creation in hardware and software environments that are now unified by the JavaScript language.

The core functionalities of Mobilizing.js center on graphic design, providing a real-time 2D and 3D engine built upon WebGL (a standard based on OpenGL and specifically defined for the web), which now seems to be settling durably into the landscape of contemporary computer graphics technologies. This choice, in addition to being a wager on the future of an emerging standard, rests on the now well-established presence of specialized graphics processors (GPUs) in mobile devices, which opens up new possibilities for real-time rendering. To use those GPUs with web technologies, WebGL becomes necessary, all the more so as it increasingly benefits from “fast-track” (hardware-accelerated) implementations. But it is also a question of being able to more easily imagine uses for the Mobilizing.js graphic engine on platforms other than those based on the Document Object Model (DOM) of browsers.

For if, thanks to the HTML5 specifications, browsers tend to allow ever more access to the lower layers of the host machine on which they run, even hardware ones, many obstacles to exploiting the specificities of devices (especially mobile ones) still stand in the way of artistic creation with web technologies: the time major software publishers need to actually implement the HTML5 specifications that would make it possible to use technical elements recently integrated into a device (sensors, etc.) is disproportionately long relative to the need for experimentation in artistic practices working with interactivity. That is why, like Cordova, PhoneGap, or Ejecta, Mobilizing.js aims to offer the possibility of using those hardware specificities by building bridges with other environments (for example a webView, whose JavaScript context is extended toward the OS via Objective-C on iOS or Java on Android). So, when a programming interface does not already exist in the web standards, the JavaScript context is extended with native functionalities. That is why Mobilizing.js includes, in addition to its graphic components, a number of general functionalities relative to integrated sensors (next-generation motion detectors, microphones, video cameras, etc.), offering a unified creative environment in a coherent and portable interface that does not limit possibilities to what the web alone allows.
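From the point of view of a script, such a bridge can simply appear as an extra global object injected into the JavaScript context by the host application. The pattern below is only an illustration; the NativeSensors object and its method are hypothetical, belonging neither to Mobilizing.js nor to any web standard:

// Prefer the standard web API when it exists; otherwise use an object
// injected into the JavaScript context by the native layer
// (Objective-C on iOS, Java on Android).
function onOrientation(callback) {
    if ("DeviceOrientationEvent" in window) {
        // standard HTML5 sensor API
        window.addEventListener("deviceorientation", function (e) {
            callback(e.alpha, e.beta, e.gamma);
        });
    } else if (typeof NativeSensors !== "undefined") {
        // hypothetical object exposed by the native host application
        NativeSensors.watchOrientation(callback);
    } else {
        console.warn("no orientation source available on this device");
    }
}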

From the user’s perspective, Mobilizing.js is a JavaScript library composed of a number of major classes of functions. Open source, the library is intended for artistic creation on digital devices and makes it possible to create interactive graphic applications on a variety of machines, because it ensures “multi-platform” compatibility by adapting automatically to the device on which it runs. But it also provides real interoperability, thanks to JavaScript, by offering access, for example, to the creation of both sides of a client-server system, or to the creation of hybrid applications (mixing high-level and native layers) on several devices using the same programming interface. Those two characteristics and this unification effort represent a major advantage for software authors, because the technological environment on which Mobilizing.js is based remains highly composite, despite the ubiquity of JavaScript.
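As an illustration of this client-server symmetry, the same language can drive both ends of a networked piece. A minimal sketch, assuming Node.js and the third-party ws module on the server side; the module choice and port number are arbitrary, and unrelated to the Mobilizing.js API:

// server side (Node.js): a tiny relay that broadcasts every message
var WebSocketServer = require("ws").Server;
var wss = new WebSocketServer({ port: 8080 });

wss.on("connection", function (client) {
    client.on("message", function (message) {
        wss.clients.forEach(function (other) {
            if (other !== client && other.readyState === 1) {
                other.send(message); // relay to all other participants
            }
        });
    });
});

// client side (browser or webView): the same language, the same idioms
var socket = new WebSocket("ws://localhost:8080");
socket.onmessage = function (event) {
    console.log("received", event.data);
};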

The Mobilizing.js user therefore employs the library to write scripts and thereby to create works. Templates and examples help users understand the architecture of the library and try it out. Though Mobilizing.js caters largely to the browser, working with it involves only JavaScript programming and requires no direct recourse to HTML or CSS. Because this experience of programming is also a way of learning the JavaScript language itself, Mobilizing.js is favorably positioned within the contemporary technical landscape.

Closely associated with the uses it promotes, Mobilizing.js is designed as a modular and upgradeable environment, and its functions can be extended and redefined to better respond to the needs of authors. It is therefore a creative tool targeted equally at novices eager to take their first steps in interaction design and interactive creation and at specialists who hope to avoid creating software components specific to particular projects. Mobilizing.js is a generic, scalable, and flexible tool.

Technical information (alpha version of the library, 07/2017).

The current version of Mobilizing.js uses Three.js as its rendering library. While some of the objects in Three.js and its scene graph are used quite directly in the Mobilizing.js object library, the system of inheritance by aggregation (see diagram below), as well as other elements, were built specifically, in particular an abstraction layer enabling the implementation of Mobilizing.js rendering functions on a 3D engine other than Three.js. Three.js and its structure therefore do not appear in the user’s scripts, but only in the Mobilizing.js object library. Other libraries are also used for their efficiency in handling certain problems, in particular those relative to the use of typography (opentype.js) without recourse to CSS styles, which cannot be used directly in a real-time 3D context.
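The aggregation mentioned above can be sketched as follows; the class and property names are illustrative assumptions, not the actual Mobilizing.js internals:

// Inheritance by aggregation: the public object wraps (rather than
// extends) a Three.js object, so that Three.js never appears in user
// scripts and could be replaced by another 3D engine behind the same
// interface. All names here are illustrative.
function MobilizingMesh() {
    // the aggregated Three.js object, kept private to the library
    this._three = new THREE.Mesh(
        new THREE.BoxGeometry(1, 1, 1),
        new THREE.MeshLambertMaterial()
    );
}

// the public interface exposes only Mobilizing-level methods...
MobilizingMesh.prototype.setLocalPositionZ = function (z) {
    // ...which delegate to the aggregated Three.js object
    this._three.position.z = z;
};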

Mobilizing.js assumes that WebGL will be supported on all mobile screens in the coming months, based on the observation that desktop operating systems and their browsers already support it. The choice of investing in this technology is based on its efficiency, the rendering operations being performed by the GPU, since a WebGL canvas is in reality an OpenGL ES 2.0 render surface managed by the OS and made accessible from a webView. Mobilizing.js can therefore be used in all browsers that allow the use of a WebGL-compatible drawing surface (meaning the vast majority of browsers as well as the webViews of native mobile platforms).

Overall, and despite the fact that browsers are a prime target, Mobilizing.js tries to depend as little as possible on the DOM in order to allow for maximum compatibility with other platforms: the intention is not to overload DOM methods, or even to free ourselves entirely from the DOM so as to run Mobilizing.js in independent JavaScript contexts, for example when a drawing surface with graphic hardware acceleration is available without requiring the use of a browser. To extend the JavaScript context, including that of a webView, we use the JavaScriptCore framework on iOS and the JavaScriptInterface on Android. These extensions permit the addition of features that browsers do not yet support, such as camera flash management or access to certain sensors that are part of the HTML5 specifications but are not yet supported by any browser.

To ensure compatibility with WebGL on platforms that do not have a browser that supports hardware acceleration, we use Ejecta on iOS. That choice followed various tests and a preliminary implementation of Mobilizing.js with JavaScriptCore on iOS. For the Android platform, the Crosswalk runtime is used with the Intel XDK.


Minimal source code sample:
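A minimal HTML page hosting such a script could look like the sketch below. The file names and the boot sequence are assumptions for illustration, not the verified Mobilizing.js loading convention:

<!DOCTYPE html>
<html>
    <head>
        <meta charset="utf-8">
        <meta name="viewport" content="width=device-width, initial-scale=1">
        <!-- hypothetical file names: the library, then the user script -->
        <script src="mobilizing.js"></script>
        <script src="script.js"></script>
    </head>
    <body>
        <script>
            /* hypothetical boot sequence: instantiate the user script
               and hand it over to a Mobilizing context */
            var context = new Mobilizing.Context();
            context.addComponent(new script());
        </script>
    </body>
</html>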

/* 
* Mobilizing Sample "script.js": simple_mapping
* 
* Demonstrates how to map a texture created from a loaded image
*/

function script()
{
    var M;//Mobilizing Context
    var R;//Renderer
    var camera;
    var cube;
    var texture;//texture to be used on the material
    var imgFile;//the file to be loaded

    this.preLoad = function(loader)
    {
        //the default loader will be used to load a file from a given URL
        //this will return a LoaderRequest object
        imgFile = loader.loadImage({url:"mire.jpg"});
    }

    this.setup = function()
    {
        //get a reference to the loaded Image, encapsulated in a LoaderRequest object as a value
        var img = imgFile.getValue();
        //check if we have access to sizes
        console.log(img, img.width, img.height);

        M = this.getContext();//get the Mobilizing Context
        R = new Mobilizing.Renderer3D();//creates the Renderer, here Three.js
        M.addComponent(R);//register the renderer as a component of the context

        camera = new Mobilizing.Camera();
        R.addCamera(camera);

        var light = new Mobilizing.Light();
        R.addToCurrentScene(light);

        //create the texture object with the loaded image
        texture = new Mobilizing.Texture({image:img});

        //create the shape, position it
        cube = new Mobilizing.Mesh({primitive:"box"});
        cube.transform.setLocalPositionZ(-5);
        cube.transform.setLocalRotation(30,30,70);
        cube.material.setTexture(texture);//map the texture to the shape's material
        R.addToCurrentScene(cube);
    };

    this.update = function(){

    };
};

Mobilizing.js research, development, and design team

Research and development for this software environment is carried out at EnsadLab (laboratory of the École nationale supérieure des Arts Décoratifs) in the framework of the research program Reflective interaction, under the direction of Samuel Bianchini, with support from the Agence nationale de la recherche (ANR) for the research project Cosima (Collaborative Situated Media, 2014-2017), and from Orange in the context of the project Surexposition/Overexposure.

Thanks to David Bihanic, Pierre Cubaud, and Emmanuel Mahé

Productions

Surexposition/Overexposure

Overexposure is an interactive work bringing together a public installation and a smartphone application. On an urban square, a large black monolith projects an intense beam of white light into the sky. Visible all over the city, the beam turns off and on, pulsating in a way that conveys rigor, a will to communicate, even if we don’t immediately understand the signals it is producing. On one side of the monolith, white dots and dashes scroll past, from the bottom up, marking the installation with their rhythm: each time one reaches the top of the monolith, the light goes off, as if the marks were emptying into the light. On a completely different scale, we see the same marks scrolling across the smartphone screens of the people in attendance, interacting with the work, following the same rhythm. Here, it is the flash of the smartphones that releases light in accordance with the coded language. For these are in fact messages being sent, in Morse code, from everyone, to everyone and to the sky, messages we can read thanks to the surtitles that accompany the marks. Using a smartphone, anyone can send a message, saying what they think and thereby presenting themselves, for a few moments, to everyone, to a community sharing the same time, the same rhythm. And we can take the pulse of an even larger community, on the scale of the city and in real time, through a map of mobile phone network use, which can be visualized on one side of the monolith or via smartphone. From an individual device (the smartphone) the size of a hand to a shared format on the scale of the city, a momentary community forms and transforms, sharing a space, a pace, the same data, following a type of communication whose ability to bring people together through a sensory experience matters more than the meaning of the messages it transmits or their destination, which is lost in the sky.

Visit surexposition.net


An Orange/EnsadLab project
A project under the direction of Samuel Bianchini (EnsadLab), in collaboration with Dominique Cunin (EnsadLab), Catherine Ramus (Orange Labs/Sense), and Marc Brice (Orange Labs/Openserv), in the framework of a research partnership with Orange Labs

"Orange/EnsadLab” partnership directors: Armelle Pasco, Director of Cultural and Institutional Partnerships, Orange and Emmanuel Mahé, Head of Research, EnsAD

Production: Orange
Executive Production: EnsadLab

Research and development for this work was carried out in connection with the research project Cosima (“Collaborative Situated Media”), with the support of the French National Research Agency (ANR), and participates in the development of Mobilizing.js, a programming environment for mobile screens developed by EnsadLab for artists and designers


Collective Mobile Mapping: Espace puissance Espace

Space^Space began with a simple idea: the projection of a constructed space onto itself. The feedback loop, to which conceptual artists have accustomed us, here becomes the starting point for exploring an architecture through the projection of its 3D model onto its very own walls. Onto each physical wall is projected the corresponding wall of the 3D model, so that the two instances of the building are perfectly superimposed in space (or spatially synchronized). The spectators, each holding a mobile screen, use a specifically designed application that proposes different scenarios of interaction with the projected architecture. Because the walls of the building become screens, the three-dimensional simulation of architecture, usually produced for design and pre-visualization purposes, here meets the reality it was intended to achieve. The building thus becomes the very site of an encounter with its own projection: an architecture simulated with digital design tools and projected onto itself through a large-scale video-mapping system, which the audience is invited to “manipulate” collectively, through strategies devised on the fly, all together.
The generic dimension of the platform implemented for Collective Mobile Mapping allows the creation of various artistic projects using collective interactivity. In this case, Centon Digital, by Christophe Domino, was presented alternately with Space^Space during the opening of the Festival Montpellier Danse 2016 and at the CoSiMa event at the Esba du Mans.

Video documentation of the project in Montpellier:

Photos of the Le Mans project:


Design and development: Dominique Cunin (EnsadLab, the research laboratory of the École nationale supérieure des Arts Décoratifs, Paris)
Technical assistants: Oussama Mubarak and Jonathan Tanant.
With idscènes, EnsadLab (the laboratory of the École nationale supérieure des Arts Décoratifs, Paris), Grande Image Lab (Le Mans), and l’Esbama (Montpellier).
Special thanks: Christian Gaussen, Philippe Reitz, and Juan-Luis Gastaldi (Esbama); Samuel Bianchini and Emmanuel Mahé (EnsadLab).

Mobilisation

An interactive artistic installation drawing on the database of the IPCC (Intergovernmental Panel on Climate Change), produced in the framework of the ANR projects Medea (Mapping Emerging Debates on Adaptation) and Cosima (Collaborative Situated Media)


A mobile, a sculpture that stirs at the slightest wind or draft, hangs from the ceiling of the entrance hall of Sciences Po. This mobile is composed of about a hundred flags onto which information and letters are projected (by video mapping). When the mobile and its flags move under the effect of the air, the video projection tries to adapt to the movement in real time; tries, because it only partly succeeds, the physical mobile regularly reasserting its rights, especially when movements around it disturb its environment.

Each flag, initially blank of any sign, takes on typographic characters, first the three letters of the name of the country it represents: FRA for France, and so on. All the countries thus represented have contributed to the IPCC, the worldwide consortium of scientific experts on climate change. Even if it does not directly form a map, the arrangement of these flags reflects the global organization of territories. Even before the text comes alive, one can perceive on some flags, behind the three letters in question, other texts in the depth of the image, sometimes numerous, sometimes fewer: these typographic aggregates represent the weight of each country’s participation in the various IPCC reports and indicate an informational space to be explored. The spectators facing this mobile can indeed act on it, with their own mobiles, their smartphones, via a dedicated application. Having chosen a country, a flag, they explore this information through a traveling shot within the typography thus set into space. The information revealed by these interactions varies according to the filters that structure the IPCC database (report / role / country / chapter participation / working group / disciplines / type of institution / theme). By taking hold of this information and adopting the point of view of a country, the acting spectators become momentary representatives of the chosen countries, in the eyes of all: the interactive representation on their individual smartphone screen is reproduced in real time on the flag assigned to them. Caught between the actions of the participants and subject to the slightest movement of the air, between the mobiles (smartphones) and the mobile (sculpture), this information attempts to mobilize us through an aesthetic experience as singular as it is collective.


Credits: An EnsadLab (École nationale supérieure des Arts Décoratifs) / médialab (Sciences Po) project, designed and produced under the direction of Samuel Bianchini (EnsadLab), in partnership with the médialab team at Sciences Po

This project was carried out in the framework of two projects supported by the Agence nationale de la recherche (ANR): Medea (Mapping Emerging Debates on Adaptation) and Cosima (Collaborative Situated Media)

Collective Loops


Collective Loops is an experimental installation featuring a collaborative version of an 8-step loop sequencer, which allows up to 8 participants to join in with their smartphones through a local web page and create simple melodies in real time by means of a shared environment.

Once connected, a participant can choose one of the available sequencer time slots through a touch interface. A second interface is then made available on the device, allowing the selection of the musical notes that are emitted from the smartphone when the selected time slot is activated by the steady tempo. To ease collaboration, the choices of each participant are made visible to all through a circular floor projection that mimics the smartphone interface.
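The underlying logic of such a step sequencer can be sketched in a few lines of JavaScript. The tempo, the slot data structure, and the playNote function below are illustrative assumptions, not the actual Collective Loops implementation:

// Sketch of an 8-step loop sequencer clock. Each participant owns one
// slot; a note set in a slot sounds when the loop reaches it.
var STEPS = 8;
var BPM = 120;                            // assumed tempo
var slots = new Array(STEPS).fill(null);  // one note (or null) per slot
var current = 0;

function playNote(note) {
    // hypothetical stand-in for the actual sound synthesis on the device
    console.log("step " + current + " plays " + note);
}

setInterval(function () {
    if (slots[current] !== null) {
        playNote(slots[current]);         // only the slot owner's phone sounds
    }
    current = (current + 1) % STEPS;      // loop over the 8 steps
}, 60000 / BPM);                          // one step per beat

// a participant claiming slot 3 and setting a note:
slots[3] = "C4";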


Design and development: