Archive for the 'convergence' Category

reblogged: When Actual Materiality Surpasses Even Real Virtuality

2007.08.06 (Monday, August 6)

HypoSurface is a disruptive material technology. It’s been blogged about plenty in the last week, so I won’t repeat what’s already been said, especially concerning how and why it works. Anyway, what draws my attention, in reference to this blog, is seeing a real, material technology approach, and even surpass, what can be done when creating space using immaterials.

[continue reading @ Metaverse Territories]

[Photo taken from the excellent in-world Second Life art installation, NMConnect, on the NMC campus]


Barcoding links between real and virtual worlds

2006.09.12 (Tuesday, September 12)

Here is a Smartpox representation of my email address. It could be any text-based message, database record or other encoded information. Print copies and plaster them like graffiti all around town. Then, with a Java-capable, camera-equipped cell phone (and Smartpox’s Java app installed on the phone), point and click to decode the contents. [via read/write web]
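Setting aside Smartpox’s proprietary format, the underlying principle (encoding arbitrary text into a grid of printable cells that a camera phone can read back) can be sketched in a few lines. The email address and cell layout below are hypothetical illustrations, not Smartpox’s actual encoding:

```python
# Illustrative sketch of the idea behind 2-D barcodes like Smartpox:
# encode arbitrary text as a grid of black/white cells, then decode it
# back. (The real Smartpox format is proprietary; this is not it.)

def encode(text: str, width: int = 16) -> list[list[int]]:
    """Turn each byte of the message into 8 cells (1 = black, 0 = white)."""
    bits = []
    for byte in text.encode("utf-8"):
        bits.extend((byte >> i) & 1 for i in range(7, -1, -1))
    # Pad to a full rectangle of cells.
    while len(bits) % width:
        bits.append(0)
    return [bits[row:row + width] for row in range(0, len(bits), width)]

def decode(grid: list[list[int]]) -> str:
    """Read the cells back into bytes, dropping the zero padding."""
    bits = [cell for row in grid for cell in row]
    data = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for bit in bits[i:i + 8]:
            byte = (byte << 1) | bit
        data.append(byte)
    return data.rstrip(b"\x00").decode("utf-8")

grid = encode("me@example.org")  # hypothetical address
print(decode(grid))  # prints "me@example.org"
```

A real system would add error correction and finder patterns so the camera can locate and read a weathered, skewed print on a wall; the round trip above just shows the text-to-cells principle.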

The map just might be the Territory

2006.08.05 (Saturday, August 5)

The excellent Terra Nova asks a non-rhetorical question, “Where am I?” and gets lots of very geometrical, topological, geeky responses. As elegant and beautiful as some of these representations are, I’m not sure they actually help answer the original question. In other words, I don’t feel any more there by looking at these maps of networks. Granted, the amount of information required to map the metaverse is copious, and the diverse types of data imply from the outset that such maps will be documents of extreme spatial complexity. Creating a coherent visual representation of different worlds whose scales, protocols and topologies differ considerably is not a simple matter.

While I’m sure that the solution will somehow be based on a topologically complex structure that is inherently related to the worlds it is trying to represent, I have a hunch that ultimately, it will be a multi-sensory experience that will be closer to art than to cartography.

Although this may, in some way, imply the use of a map as we know it, the end result will hardly be a mere graphic abstraction, but rather an experience where space and mapping converge. And while we’ve been told that “the map is not the territory,” I think this adage will be reversed in our attempt to orient ourselves when we jack into a metaverse: the map and the territory will become reversible.

Light that distorts matter

2006.08.02 (Wednesday, August 2)

In the category of material/immaterial mashups, here’s one from Siggraph:

Mutating material objects with light, as if they were video, with Morphovision [New Scientist].

Mashup: Object & Interface

2006.07.31 (Monday, July 31)

This short video, called Cubic Tragedy by Ming-Yuan Chuan, was shown at the Siggraph conference, and highlights some of the paradoxes and frustrations of building with the Second Life modeller. The most insightful gag shows how, after too much time spent in 3-D space, the edges between camera, action, interface and art can become blurred.

Automation interface

2006.07.24 (Monday, July 24)

The reBang post, Hybrid Reality Cocooning, is about software that connects real space to a 3-D digital representation. This establishes an interface for building-automation tasks like controlling lights, sound, television, climate and appliances, and even simple tasks like opening and closing doors, by hooking into home-automation software and hardware standards. vCrib has an integrated 3-D browser and modeller devoted to debugging, simulating and executing actual home installations. (See the video.)

Its integration with metaverse networks like Second Life (SL) suggests yet another example of one reality augmenting another, whether it’s material reality augmenting an immaterial one, or vice versa. Essentially, the two realities are reversible as far as augmentation is concerned. Beyond being able to control and monitor real spaces from a distance, like turning off the stove in your home, or seeing who’s at work in your studio by jacking in to SL when you’re across town, someday SL, the simulacrum, will be put to work accessing your real life, and not just feeding off of its metaphor(s).
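The binding at the heart of this, a virtual object standing proxy for a real device, might look something like the sketch below. The object names, device class and command set are all hypothetical; vCrib’s actual API isn’t described in the post:

```python
# A minimal sketch of the vCrib-style idea: objects in a 3-D scene act
# as proxies for real devices, so a click in the metaverse becomes a
# command in the house. All names and behaviors here are hypothetical.

class Device:
    """Stand-in for a real appliance behind a home-automation bridge."""
    def __init__(self, name: str):
        self.name = name
        self.on = False

    def set_power(self, on: bool):
        self.on = on
        print(f"{self.name}: {'on' if on else 'off'}")

class VirtualScene:
    """Maps clickable objects in the virtual space to real-world devices."""
    def __init__(self):
        self._bindings: dict[str, Device] = {}

    def bind(self, object_id: str, device: Device):
        self._bindings[object_id] = device

    def click(self, object_id: str):
        # A click on the virtual object toggles the real device it proxies.
        device = self._bindings[object_id]
        device.set_power(not device.on)

scene = VirtualScene()
scene.bind("kitchen/stove", Device("stove"))
scene.click("kitchen/stove")   # prints "stove: on"
scene.click("kitchen/stove")   # prints "stove: off"
```

The reversibility argued for above falls out of the same structure: the bridge could just as easily push a real sensor’s state back into the virtual object, so either side can augment the other.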

Art, tech and territory

2006.07.20 (Thursday, July 20)

This 3pointD post about a script, a web interface, and a location moves two approaching worlds (RL <=> SL) a little bit closer. Corey Linden speaks of “SL mirror-world space to visualize, market, and sell the data” using world-converging mashups such as this. I’m more interested in the innovations that scripts like this will make possible for the non-mirror-world, metaverse-as-medium built environment. In the spirit of geocaching or GPS art, invented in the wake of cheap, widely available GPS tech, what objects and spaces will emerge from the convergence between art, tech and territory?

What I want to figure out is how, when and why this:

… can be, act and look more like this:

re: Immersion or Augmentation

2006.07.19 (Wednesday, July 19)

The SLCreativity blog frames an important (in)world issue concerning “2 views of Second Life,” immersive vs. augmented. What are the implications for the built environment, as well as for building the environment, in both real life (RL) and Second Life (SL)? I’m going to attempt to reframe the issue by considering it from the perspective of “representing something.”

Augmentation: As far as the augmentation argument is concerned, there are 3 nuances to consider:
a) augmenting RL with SL; and,
b) augmenting SL with RL; but also,
c) enhancing other metaverses with SL.
Maybe this distinction comes from an a priori bias of seeing neither SL nor RL in a vacuum, but rather as 2 very intricate pieces of the other’s puzzle. Another bias I have is to consider SL a platform rather than just a game. This somehow necessitates reading usefulness into what we do with it and how we use it. Thus, if we can define SL as an interface, filter or conduit between content & container, between information & its communication, we can begin to understand SL as a part of this framework.

Anyway, deconstructing the plural, reversible nature of the transactions between SL and its siblings and cousins recognizes their common effort to decode and inform the world(s) around them. That said, the fact remains that metaverses are a specific and unique kind of container because of their dominant expressive material (media): immersive space (but I digress… or do I?).

Immersion: The idea of immersion in SL being defined as “not be(ing) contaminated by anything from the outside” is misconstrued. As I’m attempting to articulate through my work on this site (a bit laboriously, perhaps, but I’ll find a way to be more succinct…), most of SL’s content is conceived by establishing a direct or indirect semantic link between an RL artifact and an SL reference. Thus, in order to access the meaning, atmosphere or usages of a build in SL (its signification), prior knowledge or experience of the RL artifact is necessary. In this scenario, the immersive purity is necessarily ruptured by having to access, at the very least, a mental image of the reference, or worse, a Google search, in order to understand what’s going on. In the example below…
can this be an “immersive” or even meaningful experience…

…without this reference?
For total, non-“contaminated” immersion to occur, the signification system must turn to SL itself, a self-referential system, and tap into SL’s deep structure to arrive at true immersive expression.

Building standards

2006.07.18 (Tuesday, July 18)

What is a Building? What does it do? is an excellent post that got me thinking about the implications of CityGML in metaverses like Second Life (SL).

As the Directions Mag article points out, “web services provide only graphic or geometric models, neglecting the semantic and topological aspects of the buildings and terrain being modeled.” Although geospatial apps are the obvious target of CityGML, can metadata space serve as a portal between SL-like metaverses and web-service oriented geospatial representations like Google Earth (GE)?

There are discussions about the inevitable evolution of Google Earth-type spaces towards 3-dimensional interactive metaverses, similar to the SL model. The idea is compelling, but what are the protocols of this convergence? Where is the connective tissue that will mediate the spatial transformations between the geospatial “mirror world” logic of GE and the hyperbolic virtual geometries of SL?

Objects in SL can easily represent tangible information (semantic references, topological data, relative location coordinates) and can suggest intangible architectural information (atmosphere, usage or spatial expression). But in addition, is there a way to exploit the information space opened up by CityGML so that new, unexpected spatial or informational dimensions can unfold?
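To make the contrast with purely geometric web services concrete, here is a sketch of the kind of semantic payload CityGML attaches to its geometry, and how a bridging script might pull it out for use in an SL-like world. The XML fragment is simplified for illustration and is not schema-valid CityGML (real CityGML uses GML namespaces and richer structure):

```python
# Sketch: the semantics CityGML layers onto building geometry, which a
# metaverse-side script could map onto in-world objects. The fragment
# below is a simplified illustration, not schema-valid CityGML.
import xml.etree.ElementTree as ET

fragment = """
<Building id="b-042">
  <function>residential</function>
  <yearOfConstruction>1964</yearOfConstruction>
  <measuredHeight uom="m">12.5</measuredHeight>
</Building>
"""

building = ET.fromstring(fragment)
semantics = {
    "id": building.get("id"),
    "function": building.findtext("function"),
    "year": int(building.findtext("yearOfConstruction")),
    "height_m": float(building.find("measuredHeight").text),
}
print(semantics)
# prints {'id': 'b-042', 'function': 'residential', 'year': 1964, 'height_m': 12.5}
```

It is exactly this layer — a machine-readable answer to “what is this building and what does it do?” — that a geometry-only service lacks, and that could travel through the metadata portal speculated about above.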