Thursday, May 28, 2015

stemulate

I am working on a new approach to classifying data in images. I have studied different file types for GIS data, and the nice thing about geographic data is that much of it is already quantified: roads are given names, distances are measured, and altitude is recorded. I recently looked at OpenStreetMap, and as of May 2015 they report a number of attempts to combine XML, XSLT, and SVG, though these are still in early stages of development. The official OSM planet file for the whole earth is about 150 GB, which is huge. On 25 February 2015 Microsoft Research released an experimental project called Image Composite Editor, which understands images well enough to perform image completion.

I want to find a way of painting an image with SVG, finding the parts that move, and describing those changes as delta data. In the 1990s the approach was to make these changes so fast that a human could not detect when the screen was refreshed; when a CRT monitor appears in a video, however, those refreshes become visible. Through digital technology we have since come much closer to high-resolution videography. Still, much as physics describes apparently continuous processes as discrete jumps, there are visual changes the brain is unable to detect, and we as humans are trained not to notice these differences. This is described in the psychology book The Invisible Gorilla, and The Organized Mind explains that the human mind works with a limited bandwidth. For more detail please see my paper "Jurassic extrapolation increases accuracy of speech to speech engine".

By building composite images from delta files that paint only the changed elements, I hope to create delta transform files. Borrowing the way a GIS file is broken apart into elements, I hope to use a derivative of SVG, based on XML and XSLT, to make composite images that can be animated. The benefit of an SVG-based technology is that it is not based on pixels, so it works with media queries and can be rendered rapidly on any size of screen. Unfortunately, as of May 2015 it is not generally true that an SVG file is smaller than a raster image. Much research went into making HTML5, which has an XML serialization, fail gracefully on older browsers. I hope to use a new flavor of SVG that first declares what the person is looking at, such as a lake, and then uses math to determine what the lake looks like, for example using the depth of the water and the clouds in the image to work out shadows.

I have a test project: a flight simulator that uses images from existing GIS data, such as NASA World Wind and OpenStreetMap, and renders them fast enough to provide detailed scenery, while the open-source FlightGear provides the simulation of the planes. This is a challenge because FlightGear's tools are written in C and C++. One approach I might take to store the data so it can be queried quickly is MongoDB, which is commonly used with Python and Java. Both languages are supported on Android, and Kivy makes it possible to build cross-platform apps in Python. Rough sketches of these ideas appear after the list of challenges below. The key challenges I am attempting to solve at this point are:
the size of the scenery files, and the need for the speed of scenery rendering to correlate with the speed of the simulated aircraft (sketched below);
the threshold of image changes for the delta streams: the perceived change needs to stay within the perception range of human cognition (sketched below). This can be measured using the data-analytics toolkit KNIME. Another beneficial toolkit is Mathbuntu, which includes Sage mathematics and GeoGebra; it also ships LyX, which uses the LaTeX format built on Donald Knuth's TeX, and that kind of declarative markup is what I want to serve as the basis for my SVG format.
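
To make the delta-transform idea concrete, here is a minimal Python sketch of how a delta file could be derived from two frames that are already described as SVG elements. Everything specific in it is an assumption for illustration: the frames are plain SVG strings, every paintable element carries a stable id, and CHANGE_THRESHOLD is a placeholder rather than a measured perceptual constant (that number is exactly what the KNIME analysis above would have to supply).

```python
# Sketch: derive a "delta transform file" from two SVG frames.
# Assumes every paintable element carries a stable id attribute;
# CHANGE_THRESHOLD is a placeholder, not a measured perceptual constant.
import xml.etree.ElementTree as ET

CHANGE_THRESHOLD = 1.0  # placeholder: smallest numeric change worth repainting


def index_by_id(svg_text):
    """Map element id -> attribute dict for every id'd element in an SVG document."""
    root = ET.fromstring(svg_text)
    return {el.get("id"): dict(el.attrib) for el in root.iter() if el.get("id")}


def element_changed(old, new):
    """True if any attribute moved by at least the threshold, or if a
    non-numeric attribute (such as fill) changed at all."""
    for key in set(old) | set(new):
        a, b = old.get(key), new.get(key)
        if a == b:
            continue
        try:
            if abs(float(a) - float(b)) >= CHANGE_THRESHOLD:
                return True
        except (TypeError, ValueError):
            return True  # non-numeric attribute changed (e.g. a colour)
    return False


def make_delta(prev_svg, next_svg):
    """Return only the elements that need repainting, keyed by id."""
    prev, nxt = index_by_id(prev_svg), index_by_id(next_svg)
    return {eid: attrs for eid, attrs in nxt.items()
            if eid not in prev or element_changed(prev[eid], attrs)}


frame_a = '<svg><circle id="sun" cx="10" cy="10" r="5" fill="gold"/></svg>'
frame_b = '<svg><circle id="sun" cx="14" cy="10" r="5" fill="gold"/></svg>'
print(make_delta(frame_a, frame_b))  # only the moved "sun" element is emitted
```

Only the elements in the returned dictionary would be repainted; everything else is left untouched on screen, which is the whole point of the delta stream.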
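For the file-size and rendering-speed challenge, here is a rough sketch of the MongoDB side, assuming scenery tiles are stored as documents with a GeoJSON location and an SVG payload. The database and field names and the look-ahead constant are invented for illustration; the point is that the query radius scales with the simulated aircraft's speed, so a faster plane pulls a wider ring of tiles.

```python
# Sketch: pull scenery documents around the aircraft, with the search radius
# scaled by aircraft speed. The "flightsim.scenery" collection, its "loc" and
# "svg" fields, and LOOKAHEAD_SECONDS are illustrative assumptions.
# Assumes a MongoDB server running on localhost.
from pymongo import MongoClient, GEOSPHERE

LOOKAHEAD_SECONDS = 30  # assumed: keep about half a minute of scenery ahead

client = MongoClient("mongodb://localhost:27017")
scenery = client.flightsim.scenery
scenery.create_index([("loc", GEOSPHERE)])  # GeoJSON 2dsphere index


def prefetch_tiles(lon, lat, speed_m_per_s):
    """Fetch scenery tiles within the distance the aircraft will soon cover."""
    radius_m = max(1000, speed_m_per_s * LOOKAHEAD_SECONDS)
    cursor = scenery.find({
        "loc": {
            "$nearSphere": {
                "$geometry": {"type": "Point", "coordinates": [lon, lat]},
                "$maxDistance": radius_m,
            }
        }
    })
    return [doc["svg"] for doc in cursor]  # SVG payloads handed to the renderer


# Example: a fast jet pulls a much wider ring of tiles than a slow trainer.
# prefetch_tiles(-122.3, 47.6, speed_m_per_s=250)
```

The 2dsphere index is what lets MongoDB answer the $nearSphere query quickly, which is why it looks attractive for this job.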
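Finally, a toy sketch of the "semantic SVG" idea, where the format declares what the viewer is looking at (a lake) and math derives its appearance from quantified GIS properties. The property names and the shading formulas below are invented purely for illustration and are not part of any SVG standard.

```python
# Sketch: turn a semantic description ("this is a lake, this deep, under this
# much cloud") into a concrete SVG fragment. Field names and shading formulas
# are illustrative assumptions, not part of any SVG standard.

def render_lake(cx, cy, radius, depth_m, cloud_cover):
    """Deeper water -> darker blue; heavier cloud -> dimmer shadow."""
    blue = max(80, 200 - int(depth_m * 2))       # darken the water with depth
    shade = round(0.5 * (1.0 - cloud_cover), 2)  # clouds wash the shadow out
    water_fill = f"rgb(30,{blue // 2},{blue})"
    return (
        f'<g class="lake">'
        f'<ellipse cx="{cx + 4}" cy="{cy + 4}" rx="{radius}" ry="{radius * 0.6}" '
        f'fill="black" fill-opacity="{shade}"/>'  # shadow painted first
        f'<ellipse cx="{cx}" cy="{cy}" rx="{radius}" ry="{radius * 0.6}" '
        f'fill="{water_fill}"/>'
        f'</g>'
    )


print(render_lake(cx=100, cy=80, radius=40, depth_m=25, cloud_cover=0.3))
```

The same pattern would extend to roads, buildings, and terrain: the quantified GIS attributes drive the math, and the math paints the SVG.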

Although the flight simulator would serve as a "toy" project for demonstrating this technology, the real aim is to support the physical therapy platform, which would eventually help everyone, since each of us will benefit from physical therapy at some point in our lives.