
November 14, 2010

Emergence and Democracy

Emergence is the idea that given sufficient numbers of simple interactions, a relatively complex outcome may result, one that cannot be trivially traced back to the simple interactions. Wikipedia, which is itself a great example of emergent behavior, defines emergence as:

In philosophy, systems theory, science, and art, emergence is the way complex systems and patterns arise out of a multiplicity of relatively simple interactions. Emergence is central to the theories of integrative levels and of complex systems.

It struck me, while listening to coverage of the recent election season, that voting could be seen as the simple interaction at the base of an emergent system, with election results as the complex behavior that should emerge. Yet having listened to pundits rave and rant about election results across the two largest democracies, I see very little of this spontaneous complexity. Yes, parties win and lose, but generations of going through this process have not, in my opinion, produced a directed long-term behavior transcending local variations. To me that means we are either asking the wrong questions of elections (and consequently of democracy), or lack the tools to recognize emergence, or have set democracy up in a way that can never achieve emergence.

That final thought is scary, especially if you consider that most of humanity (with caveats such as China, though with the understanding that its adoption of democracy is only a matter of time) has hitched its future to this bandwagon. It appears, at least from the superficial analysis above, that the current form of democracy is not set up to deliver on the promise of a future for humanity. The questions, therefore, are: why is today's democratic setup unable to produce emergent behavior, and what can we do about it?

When I initially thought about this, I imagined it to be a problem with the lack of bounds for democratic emergence. Because there are so many parameters that modern democracies have to deal with, I figured the setup was not scaling in breadth. But the more I think about it, emergence has nothing to do with bounds. Emergent behavior changes as the bounds change, but the behavior should exist nonetheless. Instead, I imagine the following three ideas may explain the non-emergence in today's democracies.

Delayed feedback - emergent systems typically have a feedback loop as part of the simple interactions driving them. Democracy is time-delayed: votes determining government actions occur every X years, while the actions themselves are continuous. This biases votes toward the most recent governmental actions, making the simple interactions needed for emergence flawed.

Representative vs. Direct democracy - most democratic systems involve choosing representatives who in turn make legislation. This once-removed nature of legislation eats into the continuity of feedback. There are no simple actions voting on simple outcomes; instead, simple actions now vote on complex outcomes themselves.

Non-uniform participants - emergent behavior requires all the non-directed actions to be carried out by similar participants. In other words, all voters ought to be equal. Unfortunately, this is not always so. With the junta in Myanmar at one extreme and the special interest groups in the US at another, participants in a democracy are never practically equal. That makes the goal of pure emergence that much tougher to attain.

This post is by no means the first look at such an idea. Joichi Ito, the Japanese entrepreneur and activist, wrote about the idea of Emergent Democracy in 2003, and about how blogs were/are going to be the engine for making it happen. Wikipedia lists a book by Clay Shirky, called Here Comes Everybody: The Power of Organizing Without Organizations. In both cases, the organization itself is proposed to be emergent as a result of the Internet.

While it is intriguing (and rather far-fetched) to give up the current democratic setup for the promise of anarchistic self-organization of societies - there may be a case for a moderately direct form of democracy leveraging the Internet. And that just may establish a true form of emergent democracy, one actually able to propel human society forward.

April 12, 2010

Science & Morality

Have always been a fan of the TED website, and their collection of talks. Having just heard one of their videos, I was browsing the site, trying to learn a bit more about them - turns out, they actually encourage embedding and discussing their videos. Cue, glint in eye. So, here we are.

Morality, in the sense discussed in the video below, is the definition of right and wrong irrespective of what people think. Sam Harris argues that, contrary to what many people assume, science is capable of reaching such definite answers on its own, based on facts, and can therefore completely eliminate the need for morality-based declarations. Well thought out and well presented, of course - but for me the crux of the matter lay in the Q&A at the end. When asked to prove the immorality of the burqa, Sam, scientifically enough, fell back to an answer that basically said: we may not have a rigorous proof now, but given the rate of our scientific progress, we will eventually get there.

In his answer, I believe, Sam was absolutely correct and yet negligent. Yes, science will eventually get there, but people need an answer now on what is right and what is not. People have all been created with a conscience, but with varying degrees of intellect. Waiting for an intellect-appealing morality that may eventually get here is a very bad survival skill. Instead society, taking advantage of the common denominator, appealed to the human conscience. Turns out, morality is a lot like having immortal parents. Even if you replace parents with a Man with a beard in the sky, things work just as well. True, such a replacement has side-effects, a lot of side-effects, but at least it kept humanity going until science could eventually evolve to appeal to the most intellectually challenged among us.

November 01, 2005

The Office of the future

I have just been checking out some of the AJAX stuff. Needless to say, techie junkie that I am, I am pretty much blown away.


What is more impressive is the ability of the browser to act like a true application container rather than a markup display module. The movement of the web toward richer media looks more plausible from the AJAX point of view than from a world where everyone watches all their movies online. This is the rich media of the future: simple applications, centrally maintained and accessible from everywhere.


Of course this means that you need connectivity everywhere. This may not be as tough as it seems. A mixture of wired and wireless access already provides connectivity in most places - barring natural disasters. But bandwidth is a problem that will not disappear - not overnight, not in a while. Simply put, it is not economically viable to provide high-bandwidth connectivity to everyone. Yes, a big chunk of users will have access to it, but not everyone.


The software world is going through its pangs of simplification. There is still a while before the dust settles and a winner can be predicted. But what struck me was the relation to the hardware world. Take, for example, the office desktop. Even with the laptop, the desktop today is not truly mobile.


The next step is the integration of business communication into the laptop. All the elements are here already. Laptops are shrinking - they are small enough that any more shrinking can only hurt productivity. VoIP is available. And most business users are already connected through adequately thick pipes. What is to prevent an integrated phone in the laptop - a tiny Bluetooth-enabled headset and a VoIP phone number that goes with the laptop?


That is the next generation of the business laptop. Integrated communications - data and voice over the same channel through the same end device. You heard it speculated here first.


damn-where-is-that-phone-when-you-need-it
-- ravi

February 25, 2005

iMusic

The ability to share and enjoy music is inherent in music itself. No matter what the MPAAs and RIAAs of this world want us to believe, music is not just another piece of merchandise. It is not like a scoop of ice cream that you enjoy more if you have the whole thing to yourself. Music is meant to be exchanged, loved, discussed and sometimes cherished.

Notwithstanding the lawsuits, the advertisements and the admonishments - music will be shared. It is not "music piracy" - that label has been shoved down our throats by the powers that be. Sharing music is natural; that is what humans do - we do not sit in our cubicles listening to music and slamming the pause button every time somebody comes within earshot. And humans are not thieves either. We do not live to cheat someone of their ability to earn a living. Each and every one of us understands that we need to earn a living, and therefore we are willing to share a bit of what we have in order to allow someone else to live.

The mechanism that can work therefore has two primary requirements:

  • The same mechanism should be used for buying and sharing music.
  • Music should be cheap, damn cheap - not 99 cents, but 5 cents a song.

Let's call this portal iMusic. iMusic is a p2p portal, probably built on the Gnutella protocol, and it also allows BitTorrent for exchanging files. The network has two kinds of users - customers and authorized resellers. Authorized resellers are authorized outside the system, probably by the music agencies themselves, and are allowed to "sell" music. Underlying the entire portal is a payment mechanism geared for small-denomination, high-volume transactions - something like PayPal. And unlike eBay, which has a one-way redirection, PayPal here talks back to iMusic too.

Now every user on iMusic has a "buy ratio". This is like a credit rating on iMusic: the ratio of the number of songs you bought to the number of songs you downloaded for free. The closer this number is to zero, the more of a freeloader you are, while 100 means you only buy music and don't download anything for free at all.

Your account 'knows' all the songs that you bought or traded. Others only know your "buy ratio". You decide the buy-ratio levels you want to trade with. This is something like the upload/download ratios currently in use on the various p2p sites, except that it tracks actual transactions - either a successful copy or a buy. The smaller this ratio is for you, the more difficult it is to find users who will trade with you for free. Of course your search will still show you all the possibilities, making your trading process all the more tempting but difficult to complete.
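A rough sketch of that bookkeeping, in Python. The Account class, its field names, and the normalization of the ratio to a 0-100 percentage are all my own invention for illustration - iMusic is only an idea, not a real system.

```python
# A rough sketch of the "buy ratio" bookkeeping described above.
# All names here are hypothetical -- iMusic does not exist.

class Account:
    def __init__(self, name, min_peer_ratio=0.0):
        self.name = name
        self.bought = 0            # songs purchased from authorized resellers
        self.free_downloads = 0    # songs traded for free from other users
        self.min_peer_ratio = min_peer_ratio  # threshold for trading partners

    @property
    def buy_ratio(self):
        # Bought songs as a percentage of all songs acquired: a user who
        # only buys scores 100, a pure freeloader scores 0. A brand-new
        # account is (by assumption) not penalized.
        total = self.bought + self.free_downloads
        return 100.0 if total == 0 else 100.0 * self.bought / total

    def will_trade_with(self, other):
        # Each user gates free trades on the other side's buy ratio.
        return other.buy_ratio >= self.min_peer_ratio

buyer = Account("loyal", min_peer_ratio=50)
buyer.bought = 9
buyer.free_downloads = 1

leech = Account("freeloader")
leech.free_downloads = 20

print(buyer.buy_ratio)               # 90.0
print(buyer.will_trade_with(leech))  # False: the "not enough credit" message
print(leech.will_trade_with(buyer))  # True
```

The opposing forces described below fall out of the last two lines: a low ratio shuts you out of free trades, which nudges you back toward the resellers.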

Whenever you like a song, or know you will like one and don't want to bother with searching and downloading, you connect to one of the authorized resellers and buy the song from them. Of course, authorized resellers could be a lot more creative and sell you "plans". A user can sign up for one or more plans, or just go to any authorized reseller to buy a song. And why be loyal? Because the more loyal you are, the more your reseller can positively impact your buy ratio by registering extra "buys" on your behalf. Frequent flyer miles, anyone?

The iMusic marketplace will thrive on the existence of opposing forces that prevent large-scale piracy. Users want as high a "buy ratio" as possible, so that when they want to check out new songs they do not have to wade through too many "You do not have enough credit to access the song shared by this user!" messages. At the same time, a user's own threshold limit will determine his standing with other users.

iMusic will also spawn a reseller program, and the associated competition will drive down prices. The reseller program will be handled by a central "reseller clearing house" that in a way owns the entire iMusic mechanism. All artists are affiliated with this clearing house. Each successful sale entails a small payment from the clearing house to the artist and a subsequent recovery from the resellers. Of course, a market between the artists themselves will result on the clearing house. If an artist prices a song too high, people will still listen to it, but will use the credit from their other "buys" to download the song for free. And pricing too low means that unless the song is a phenomenal hit with a high buy percentage, not enough money will be recovered.

Songs and their associated advertisement: each copy of a song on the network is introduced either by the artist or by users as a rogue copy. Rogue copies will be abundant at first, but they will stand out in all searches as "unknown source". All authenticated copies keep track of the number of times they were bought versus the number of times they were traded, giving them a buy ratio as well. That buy ratio, along with the trade count, gives an indication of the top of the charts.

Of course, there will also be associated sites and ratings and chat sections and forums for discussion of the latest in everyone's own queer music tastes. There will be the inevitable "Save this band" promos, so on and so forth.

On the whole, iMusic will be a full-blown marketplace, allowing direct interaction between artists and users with a secure, simple, small-payment mechanism. It will allow both sale and trade - and most importantly, treat customers like customers, not thieves.

Bet you've heard this before!
-- ravi

April 26, 2004

Data Integrity Certification Service

There has been a big hullabaloo around EVMs and electronic voting. And one of the main requirements that has been consistently missing is what is called a "paper-trail".



Many detractors of electronic voting have cited the lack of a paper trail as the main reason electronic voting is unreliable. What is it about paper that makes a paper trail more dependable than an electronic trail?



One of the main reasons is that a paper trail is physical, meaning that it is constrained in space and time. Security, authentication and authorization mechanisms have been built taking this constraint into account.



What if data were given this type of constraint? If there were a means of constraining data in space and time - preventing duplication and correctly identifying the time of entry for a data item - it might be possible to establish the integrity of any given data point.



Consider a mechanism, called the DICS mechanism, based around an online trust that provides a time-based data integrity service. Each client has a two-way relationship with the trust: the client signs up for, and has a tie-up with, the service provider.



Let's assume that the client stores data in a relational database, and that there is a row of data to be filled in at any time. To protect it, there needs to be an additional column in the table storing a data element that is a sum total of all the information in the row, encrypted in a way that prevents tampering.



The simplest way of doing this would be to concatenate the data in the row, encrypt it with a public key, and store the result in the column. The problem with this approach is two-fold. First, there is no time information; second, it is possible for a single entity with access to the system to also have access to the private key, rendering the entire process unviable.



To overcome this, we introduce the entity called the DICS.



DICS <-----------------------------> Client



Assume that there is a row of data that needs to be protected. Call this data the variable x. (Assume that the data in the row is either concatenated or otherwise combined to get this single entity.)



Other data variables used are:

d and d', the private and public keys of the DICS trust, and

c and c', the private and public keys of the client.



f() and f'() are the encrypt and decrypt functions used in the two-key algorithm. (data = f'(f(data, c), c'))



The process starts with the client, which calculates var_a = f(x, c'). This data is sent across to the DICS server.
The DICS service calculates var_b = var_a + t, where t is a timestamp. The '+' is a defined combination function with a simple inverse.

The service then returns var_c = f(var_b, d') to the client, while storing var_c and t with itself. The service does not return t, though as you will see, that is not an issue.

The client stores var_c in the column of its database.



The service exposes a simple function to the client: checkData().



checkData() sends across the value x, recalculated using the same concatenation function and data from the client database, along with the stored var_c. The service can then recalculate var_c from the given x and its stored timestamp to verify the authenticity of the data.
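For the curious, the whole exchange can be sketched in a few lines of Python. This is a toy with tiny textbook-RSA numbers and no padding or hashing; the variable names follow the post, while the actual numbers and the combine function are made up for illustration.

```python
# Toy sketch of the DICS exchange using textbook RSA-style arithmetic.
# Tiny numbers, no padding or hashing -- purely to illustrate the message
# flow from the post. f(m, k) = m^k mod n serves as both f() and f'().

N = 3233       # 61 * 53, the classic toy RSA modulus
C_PUB = 17     # client public key c' (the private keys are not needed here)
D_PUB = 17     # DICS public key d' (same toy exponent, for brevity)

def f(m, key, n=N):
    return pow(m, key, n)

def combine(a, t):
    # The '+' combination from the post, with an obvious inverse.
    return (a + t) % N

# The post assumes the row is already reduced to a single integer x.
x = 1234

# Client: var_a = f(x, c'), sent to the DICS server.
var_a = f(x, C_PUB)

# DICS: attach a timestamp, seal the result, and store (var_c, t).
t = 987
var_b = combine(var_a, t)
var_c = f(var_b, D_PUB)

# checkData(): DICS recomputes var_c from the client's current x
# and the stored timestamp, and compares.
def check_data(x, var_c, t):
    return f(combine(f(x, C_PUB), t), D_PUB) == var_c

print(check_data(1234, var_c, t))   # True: row is unchanged
print(check_data(1235, var_c, t))   # False: row was tampered with
```

Because modular exponentiation with these parameters is a bijection, any change to x necessarily changes the recomputed var_c, which is exactly the tamper-evidence the scheme relies on.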



The time value is stored with the data, so in case of errors with data authenticity, further tests can be performed. Time reports can also be compiled for various requests, to detect any discrepancy between the usage reported by the DICS service and the actual usage by the client.



Secondly, anyone changing the data needs to get it re-authenticated by the DICS server, which shows up as a discrepancy between the number of data entries and the number of data authentications. And if there was no access to the DICS service during an update or change, the data is immediately known to be wrong.



sounds good?

- ravi

January 21, 2004

Personal Programming - Requirements

A Personal Programming Platform (P3) is a programming platform built to be a very portable and easy-to-use programming language/environment. The following are some of the requirements for a language to be called P3.



Small Footprint: This is the most important requirement for a P3 platform. The entire programming and execution context - including the user files, output directories and any other resources - should be available as a single unit: a single directory tree. This tree should not in any way be tied to the system on which the platform is executing. User files should be forced into the same directory structure. Installation should be little more than unzipping. There should be no references in the Registry or in the System folder or wherever. The entire P3 platform should work out of a single directory structure - by design, even if that means sacrificing some flexibility.



The idea of a small footprint is portability. I should be able to take my entire development, execution and data contexts with me in one move when I go. A single tar or zip command should be enough to migrate to a new system; a single delete command should remove all traces of my having worked on the system. Simplicity and ease of movement are the hallmarks of a small-footprint P3 system.



CLI & WIMP enabled: The main execution context of the system should be something like MATLAB's. When the program is launched, the user lands in a command-line interface (CLI). Here, interactively, he can accomplish the tasks he wants to do: run a command or a script, invoke an IDE to develop and edit a script, and bring up a debugger in case the scripts misbehave. During an entire session with the programming language, the CLI is the base for working. The Python CLI is a very good starting point, though of course it would need some changes too.



Additionally, there will be tight integration with the WIMP style of working. For example, when a script has been worked on and perfected, it should be possible to promote it to a "macro" on a toolbar button with a simple command. The next time, the script will be available at the click of a button.



Run Time Database: This is the requirement that makes a P3 environment powerful. During execution, any script will have access to an SQL-style database. This need not be a true database, but it will automatically be available to any script or program while it is running, without additional programming effort. This run-time database should support SQL-style queries, though not necessarily the entire command set: a smaller, modified SQL command set, dSQL (dynamic SQL), needs to be supported.



The database is not built for performance, or to support deeply nested select statements. Rather, it provides an easy-to-access database system that can be used by any executing program. All data that is not hardcoded will be part of this database, and all runtime information is best accessed through it.



Of course, this database needs to be persistent. Shutting down the program should not destroy the data.



The database not only stores information the program needs for its execution; it also acts as a data source for many other data requirements. The database will define a schema that allows users to "query" for information from the "current date" to a "random" number. Most function calls will be reduced to database calls, with function wrappers available for convenience.



Powerful synchronization: This is another requirement. P3 is designed to be used on the fly, on the move, and not necessarily from the user's base system. As a result, it should be well aware of all changes being made to it. When two or more P3 instances are brought together, each should be able to describe the differences it has with the others - not only changes to the data and the scripts, but also any changes to the internal scripts.



Easy customization: All customizations should be easy to track, package, distribute and incorporate. However, all changes should necessarily be in accordance with the guidelines above.



does it make sense?

- ravi

Personal Programming

This is an idea that has intrigued me for some time. It came to me when I was thinking about automating my website. It is a pure-HTML site, and I wanted some kind of automatic updating of the links on each page whenever a particular page was altered. An obvious answer would be a "content management system", but I was too lazy to search for one that fit the bill. My requirement was simple: change a link in each page automatically - something along the lines of a simple perl hack, rather than anything else.



I am not averse to computing, but at the same time I am lazy. Perl is made for me, but it has some deficiencies for my purpose. Perl promotes laziness, but only after it is up and running. I want something that promotes laziness _always_.



And it should be non-intrusive. It should be intelligent. It should keep everything with it and not make me think. And it should be portable. I will try to come up with a wishlist in the next blog. Do you know anything that provides such a platform?



Anyways, onward to the requirements,

- ravi

October 14, 2003

logic...

I was talking today and I got this idea.



Think about the time when you were faced with a problem. Not a problem defined in terms of problem -> ? -> solution, but rather one that is like an unknown mass. All you know is ?? -> ?? -> ***. You have only a vague idea about what you want; what you can get depends on what you have; what you have is not totally defined; and how you are going to do it is not defined either.



Typically called an unstructured problem.



The items in this drama are these...



1) unstructured mass of information or data

2) unknown process

3) approximate output is known.



Think about a diamond-cutting process. When the diamond cutter gets an uncut diamond, he has something he does not truly understand. He has a process, but does not know how to apply it. He knows roughly what the final output should look like.



Think about the process as logic. You have the tool to cut the unknown mass of data, but don't know how to use that tool - because there are a billion ways of using it, just like the billion ways of cutting a diamond. Only a few of them give the clarity a diamond can have. And that is the way of using logic: to cut the information and classify it so that you get the clarity that guides you toward the solution.



The diamond cutter cutting the diamond and the problem solver solving an unstructured problem.



In fact, a majority of mankind's effort goes into facilitating this process of problem solving, making it a repeatable process rather than an abstract art. There is one such method whose name escapes me; googling does not seem to help either.



Anyways, liked the analogy so put it here,

other stuff,

- ravikiran n.



PS: new .sig, more corporatish, and massively scaled in terms of data content

October 01, 2002

Rammstein

And more specifically, "Stripped". Incredible song. Or, for that matter, "Kokain". Man, those riffs just drive you out of your mind, don't they?

Filled up a survey today, about some perception thing for companies recruiting on campus. It was so totally painful. I don't really understand why I spent so much time filling it up. There was this HUGE matrix which had to be filled with my opinions. Someone did not tell them things properly: I don't have opinions. At least not enough to fill up that monstrous matrix of theirs. Well, I did try, for a while, forming opinions on the spot and putting them on paper. Do you know how hard it is to form opinions on the spot? It is. And if you are finding it easy, you don't form opinions, you just think you do. Trust me on this. :)

One of the most incredible things is that most people around you don't bother to form opinions. They have a few of their own; you can tell an opinion is their own when they can be completely irrational about it and its consequences. But most other opinions you see around are only the sum total of the opinions formed from the positive part of one's sphere of perception, that is all.

Okay, enough rambling. Let's continue the discussion we were having last. In the last post, we went through a number of definitions leading up to the LSI, or linear scale of information. Given any observation, it can be located on this scale. What is an observation? An observation is any representation of a Data Source, or DS. A photo is an observation of some reality. A word is an observation of some idea. A poem is an observation of some emotion or idea. A simple sentence is an observation; so is a complex mathematical model of the universe.

One peculiarity of the LSI should be kept in mind: it stretches from 0 to infinity, unbounded on the upper side. This means that a DS lies at infinity, and a completely useless bit of information lies at 0. We define data to lie in the lower reaches, closer to 0 on the LSI. Information is relatively higher on the scale, representing a greater richness of data about a particular DS. Knowledge tends toward the object itself. A picture, worth a thousand words, is therefore higher on the LSI than the words it replaces.

This can be extended to any object, idea, thought or any other information content without any modifications. We can therefore use this structure to compare and develop better and higher forms of information management systems. That is what is envisaged as the end objective of this study. This structure can be used to describe any informational content with ease. We will go into details about the implementation of this structure soon, but before that we shall look into the way this method can be used to model interactions.

We define an interaction to be a process that allows for the transfer of data between a DS and a DA using a Data Transfer Medium. This is the simplest definition of an interaction. An interaction gives rise to the following results. Information is transferred from the DS to the Data Acquirer. In addition, the DS can change its state due to its interaction with the DTM (also known as the medium). Further, the interaction between the medium and the DA causes changes in the DA. Note that these changes are in addition to the simple transfer of information attributed to the interaction.

This in fact follows from the definitions we saw yesterday. We have already talked about a query that the DA uses to get information from the DS. When the query travels from the DA to the medium, the medium has obtained information; this causes a change in the medium itself. When the query is transported to the DS, the DS undergoes changes because of the informational content of the query. The same process occurs when the DS replies with the answer. The reader may note that no change occurs in the DA during the asking phase of the query, and none in the DS during the reply phase. The DTM changes twice: once for the query and once for the answer.
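The bookkeeping above can be made concrete with a small sketch; the class names mirror the post's terms (DS, DTM, DA), while the state-change counter is my own simplification.

```python
# A small model of the interaction described above: every hop through the
# medium changes the party that receives information. Names follow the
# post (DS, DTM, DA); the "state_changes" counter is an illustration only.

class Party:
    def __init__(self, name):
        self.name = name
        self.state_changes = 0   # how many times incoming data changed us

    def receive(self, message):
        self.state_changes += 1
        return message

def interact(ds, dtm, da, query, answer):
    # Query phase: DA -> DTM -> DS (the DA itself does not change yet).
    dtm.receive(query)
    ds.receive(query)
    # Reply phase: DS -> DTM -> DA (the DS does not change while replying).
    dtm.receive(answer)
    da.receive(answer)

ds, dtm, da = Party("DS"), Party("DTM"), Party("DA")
interact(ds, dtm, da, query="what?", answer="this.")

# Matches the post: the medium changes twice, the DS and DA once each.
print(dtm.state_changes, ds.state_changes, da.state_changes)   # 2 1 1
```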

Let's see some practical applications of the entire structure. Any systemic structure can be abstracted using it. In fact, with the addition of the term interaction, we can now model dynamic changes in systems too.

Mail me, if you think there is some structure that cannot be abstracted using this framework. We will go into more practical considerations using this framework in later posts.

This is the first time I have actually continued a post beyond just one post. That must mean I don't really think this idea is crap.

Regards,

~!nrk

September 29, 2002

Data, Information and Knowledge

This, in short, is the brief history of the universe. I am not writing this blog after being overly fascinated with the book whose title sounds obscenely like the quip I just quipped; in fact I think I read that book a long, long time ago. I am writing because I have an alternate view of the universe: a view that breaks everything down to a point on a line whose ends are data and knowledge, with information lying in the middle. I know I am getting a little too abstract here, but I hope things will become clearer as we go along.

When I used to study science, something struck me as very odd. Physics, especially of the variety normally taught in high schools, breaks the universe into two sections: the physical universe and the laws that govern it. What struck me as odd is that god actually defined such a cute little dichotomy in his world. Just as we have data and instructions, male and female, good and bad, we have matter and laws. Okay - matter, energy and all that dark crap too, but basically the tangibles and the intangibles. Okay, this is not quite true either, but... wow, this is tough, getting the definition right. Basically the problem with god and his universe boils down to this: how did he come up with something that exists, and then throw away a hell of a lot for us to discover? Why all this segregation? Why this duality? Why was matter there for all of us to see, and the rest - the relationships, the laws, etcetera - left for us to discover?

But then think about it. What was matter? Ask someone in the dark ages (dark ages NOT defined as the time before the computer) and their *ologists will tell you that it is nothing but a combination of air, water, earth, fire and something else. Somewhere down the timeline, people would tell you that matter was made of unbreakable balls called atoms. Then people went berserk: matter was made of all sorts of strange, mystical and mythical substances which, incidentally, no one can see, but which ought to be there for matter to make sense.

So what is different about matter between the dark ages and now? Nothing. It is the same old matter, burnt and forged into different shapes, but still the same old matter. What changed is information. The information known to man has changed the way people look at matter. If this information had not been available, a lot of people in Nagasaki would have been nth-generation residents, instead of what they are now. Matter has changed because of what matter is to different people. To the ordinary man, matter is nothing more than just earth and air. Hence what is important about matter is not matter itself, but information about matter. What we see as matter is nothing but the extract of the information conveyed to us by our various input devices.

We will now look at a totally different way of seeing the universe. A way in which there is no difference between the various units of matter, energy, ideas, minds and every other thing in the universe. This unified way of looking at the universe is going to help us define the entire universe on a one-dimensional scale rating information content. This will then give us a powerful way of dealing with many problems through a vastly simplified, unified methodology.

But before this we need to set up some basic framework. We postulate the existence of three different types of entities in this universe. The first is the Data Source, or the DS. The Data Source is characterised by the fact that it contains data; it owes its existence to the data it contains. There is no restriction on the data it contains. Of course, we haven't defined what data itself is. We are deliberately not defining it, since it will be defined by the circumstance under view; moreover, we cannot define it in isolation from the other units under consideration. The second entity we postulate is the Data Acquirer, or the DA. The DA can query the DS for data through the use of what is known as a Data Transfer Medium, or DTM.

Given these basic units, we define some terms. The first term we will define is the Data Completeness (DC) of an entity. DC is defined as the relative content of data of a particular kind in a particular Data Source. Hence DC is defined for a DS and a data type. For example, a DS has 100% DC about itself: any DS can answer any question about itself, so its DC is complete. Note that DC is independent of the query for data, the way the query is designed, the DA itself, or the DTM used for delivery. The actual response of the DS to a query is a function of the ability and capacity of the DTM and the DA.

This leads to an interesting and obvious statement. Any DS is 100% Data Complete with its own data. In fact, any entity which is 100% Data Complete with the data of any DS is virtually indistinguishable from that DS, because the said entity can answer any question about the DS. This means that no DA can distinguish between the impersonating entity and the Data Source itself.

Now the DC by itself does not give a powerful medium for expressing data relationships. Since the DC is fully defined by the data type and the DS, we define another term called the Relative Data Completeness, or RDC. RDC is defined as the relative completeness of data given the DS, the DA and the DTM. For example, a still photograph has an RDC of close to 100% for the original static setup, given a DA that uses sight as its only source of data input. The moment the DTM expands to include, say, touch, the photograph no longer has 100% RDC.

The RDC therefore gives a powerful medium to express the quality of data relationships between the DS, the DTM and the DA. We will dwell more on various examples for these terms in later posts.
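To make these relationships concrete, here is a minimal sketch of the DS / DTM / RDC idea in Python. All class names, the "channels" notion, and the percentage scoring are my own invention for illustration; the post itself defines no formulas.

```python
class DataSource:
    """A DS: owes its existence to the data it contains."""
    def __init__(self, data):
        self.data = data  # mapping of attribute -> value

    def query(self, key):
        return self.data.get(key)

class TransferMedium:
    """A DTM: carries only some of the DS's attributes (e.g. sight only)."""
    def __init__(self, channels):
        self.channels = set(channels)

    def carry(self, source, key):
        return source.query(key) if key in self.channels else None

def relative_data_completeness(source, medium):
    """RDC as a percentage: how much of the DS is reachable via this DTM."""
    reachable = [k for k in source.data if medium.carry(source, k) is not None]
    return 100.0 * len(reachable) / len(source.data)

# The photograph example: a scene with visual and tactile attributes.
scene = DataSource({"colour": "red", "shape": "round", "texture": "rough"})
sight = TransferMedium(["colour", "shape"])          # sight alone
sight_touch = TransferMedium(["colour", "shape", "texture"])

print(relative_data_completeness(scene, sight))        # drops below 100
print(relative_data_completeness(scene, sight_touch))  # 100.0
```

As in the photograph example above, widening the DTM changes the RDC even though the DS itself is untouched.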

Data is always handled in packets called observations. This is not the observation defined for an experiment; an observation here is a taggable block of data. Observations differ from one another in their quality, and are generally substantiated by data. The amount of data represented by an observation is its relative richness. Richness of an observation is defined on a scale called the Linear Scale of Information, or LSI. Data is at one end of the LSI scale, knowledge is at the other extreme, and information lies in between. Data are the small individual pieces of information that border on indivisibility. Knowledge is completeness of data. An entity which has 100% Data Completeness (DC) is perfectly knowledgeable, and can in fact replace the DS itself. An entity having a Relative Data Completeness (RDC) of 100% appears, for a particular DTM and DA, to be the DS itself.
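The LSI can be pictured as a single number attached to each observation. A toy sketch, with richness values that are entirely invented for illustration (the post assigns no numbers):

```python
# The Linear Scale of Information: 0.0 = raw data, 1.0 = complete knowledge,
# information in between. The placements below are made up.
LSI = {
    "a single sensor reading": 0.05,   # data: borders on indivisibility
    "a photograph of the scene": 0.5,  # information: a structured slice
    "the scene's full state": 1.0,     # knowledge: 100% DC, replaces the DS
}

def richer(a, b):
    """Return whichever observation lies closer to the knowledge end."""
    return a if LSI[a] >= LSI[b] else b

print(richer("a single sensor reading", "a photograph of the scene"))
```

The point of the one-dimensional scale is exactly this: any two observations become directly comparable.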

We will stop this round of definitions here. Check back for more data and information on these terms soon.

tada for now

~!nrk

September 08, 2002

Context Sandbox

This came to me when I saw some students carrying their CPUs to a presentation for a course. There is this course we have, dealing with databases. For that course the students have to do a project - a program. The platform could be anything, as long as it used databases for functioning - ASP, VB, VC++, whatever. And at the end of the term, the students taking the course have to make a presentation.

For the presentation, the students had to carry their CPUs to the professor. Why? Well, basically, the architecture of the project required the students to connect to a remote database. And this was generally a painful process, considering that we are not doing Computer Science here, and most of the time a final connection is established through a lot of trial and error. By the time a connection is established and a connection string is finalized, a number of changes would have been made to the system the student is working on, and the student would not be in a position to replicate the same on another machine.

Thus, this database project used to fail regularly if the student just carried around the program as code, or as an executable (remember, the DAOs and ADOs required were part of the OS), or in any other format. This almost forced the program to run only on the machine it had been written to run on. Thus we saw people carrying CPUs to and from the prof's room for the presentation. The idea was that all that needed to be added was power, and the program would run.

What do we have as part of a computer system? An application and data. Right? Wrong. There is also a context. The context is the executing environment of the application. Now in modern computing, this context is defined by the OS to an extent, and is realised by .dlls, APIs etc. The idea being that those entities not inherent to a particular application should be outsourced and maintained by another party, or the OS. But look carefully: there are a lot of cases where third-party tools are installed only for a particular program.

Let's look at some examples from the Windows world. Plugins are one such set - say Photoshop plugins, Internet Explorer plugins, or Acrobat Reader plugins. For most scenarios, the application (Acrobat) and the data (.pdf file) alone would constitute the complete context. But for some other scenarios, the plugin would also be a part of the context. Without it, having both the application and the data would be effectively useless.

Let's look at another example: codecs. Say you have an AVI file. An AVI file can encode its video and audio streams in different formats, and the application requires codecs to understand the two streams. Now what good is a great movie (data) on your machine (media player) without the codec?

The above examples illustrate the need for an execution context in addition to the application and the data. Now let's look at what a context sandbox is.

A context sandbox is that minimum amount of information which will allow an action to be performed on a secondary machine, when the action is currently being performed as such on a primary machine.

There are some qualifications to be stated here:

  1. It is assumed that the primary and secondary machines are fundamentally capable of performing the action. In other words, no definable context sandbox can exist for your washing machine to play your favourite movie.
  2. Information will be assumed to mean only that relating to software. Software will also be loosely defined as a sum of data and instructions. This means that information such as "Go get a life, buy another mp3 player" is not a context sandbox for a primary machine which is an mp3 player.
  3. Quality of performance is not an issue we will be dealing with here. Fundamental capability does not promise quality of performance.
  4. A machine is defined as the sum of all units that allow performance of a particular task. This includes hardware, software and any other environmental issues, including power, temperature etc.
  5. Performance "as such" implies performance without change to the machine - the machine, of course, being as described above.

So that is the idea. We will look more into ramifications of it in future posts.
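One way to picture a context sandbox is as an explicit manifest: the minimum information, beyond the application and the data, needed to repeat the action on a secondary machine. The sketch below is my own toy rendering of that idea; all field names, the codec identifiers, and the dictionary layout are hypothetical.

```python
def missing_context(sandbox, machine):
    """Return the context entries the secondary machine cannot satisfy."""
    return {dep: ver for dep, ver in sandbox["context"].items()
            if machine.get(dep) != ver}

# A manifest for the AVI example above: app + data alone are not enough.
sandbox = {
    "application": "media_player",
    "data": "movie.avi",
    # the execution context: codecs, plugins, connection strings, ...
    "context": {"video_codec": "divx", "audio_codec": "mp3"},
}

secondary = {"video_codec": "divx"}  # this machine has no mp3 codec
print(missing_context(sandbox, secondary))
```

If the returned dictionary is empty, the secondary machine can run the action "as such"; otherwise it lists exactly what must be installed first - which is what the students were avoiding by carrying the whole CPU.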

Watched Memento today. Really kewl movie. This is the second time. Nothing new was learnt, but I spent some time on the nuances of the amount of overlap the screenplay writer allowed between the scenes. Really well thought out.

~!nrk

September 01, 2002

Sphere of eConsciousness

I had defined a term in my post on Tuesday, August 20, 2002: the Sphere of Perception. I had said that it is the sphere which a person identifies and understands; it defines the region from which inputs are used by people for learning.

There is a related term I want to talk about: eConsciousness. Offline, your entire perception is defined by your senses - your sight, hearing and touch, in addition to smell and taste. So what are your online senses? What part of online life are you attuned to? What is your source of information from the online world? To round it off, what part of your online presence are you attuned to? What is your eConsciousness?

We shall define eConsciousness loosely as that part of online life you are attuned to. This basically consists of two parts. The first is the part of online life you actually know and identify. What exists on the World Wide Web? The answer to that question is the first part of eConsciousness. The second part pertains to the part of online life which you truly understand. What part of the life online do you truly know? What part are you comfortable with? What part of the online life you know makes sense to you, and is not a source of fraud waiting to happen?

Why did I get this idea? Well, it goes like this. I got a message from SmileyGram about an e-card, basically from a friend of mine who apparently had given out my email address. I had to go and check out what was happening there. So I checked the URL and pointed my browser to it. It took me to a page which needed 5 names and email addresses before it would show me the card. And nothing was optional. Of course I filled it with ****@****.com to get to the page.

I then mailed my friend asking her not to put my email in such forms again. I have been on the web long enough to have my name in a decent number of email databases; I really don't want another one getting hold of mine. The reply was along the lines that this was a 'good site', that it 'just' needed 5 friends' names, and that the card was 'worth it'. My friend is showing remarkable eConsciousness of the first kind, but not much of the second variety.

This, I think, is the case with a majority of users. These are the users who click on the funlove messages. These are the people who download and run cute screensavers with trojans and backdoors. These are not people who are exactly alien to the online world; in fact they know of more free email sites and free e-card sites than the first 3 pages of Google. But they don't really understand the net. If we are to make the net a more secure place, it is this set of people we have to target. It is this set that must be told that not everything on the net is what it seems. Educate them, and we just might become safe in spite of all that Microsoft has planned for us.

One day I am going to develop an eConsciousness quotient.
Should help, don't you think? Tell me if you think so, or if you don't. [of course, after demungling the email id]
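For fun, here is one toy way such a quotient might work, combining the two parts defined earlier: breadth (what you know of the online world) and depth (what you truly understand of it). The 0-100 scales and the weighting are entirely my own invention; the post proposes no formula.

```python
def e_consciousness_quotient(breadth, depth, depth_weight=2.0):
    """Score on 0-100; understanding counts for more than mere exposure."""
    assert 0 <= breadth <= 100 and 0 <= depth <= 100
    return (breadth + depth_weight * depth) / (1 + depth_weight)

# The friend in the story: knows plenty of e-card sites, little about scams.
print(e_consciousness_quotient(breadth=80, depth=20))  # 40.0
```

Weighting depth more heavily matches the point of the post: knowing many sites without understanding the net should still score low.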
Hmm, the winter is coming. Things are becoming cooler, and I have started skipping the baths. :)

~!nrk

August 30, 2002

Absolutely Nothing!

Anarchy - the next form of government. Anarchy Rocks!!!

Welcome back. Oh well, I just wanted to welcome myself. Reborn. Basically I had these exam things. They lasted 5 horrible days. Had 8 of them coming at me, of various hues and sizes. Some qualitative, others quantitative. Some were a strain on the brain, some on the memory, some on the heart, and others left a deep scar on my psyche. And totally changed the way I thought about nothing.

Have you ever thought about nothing? No, I mean it. Did you ever think about nothing? No, not vacuum. Not that. Just about nothing. No, not even void. Just nothing.

For when we talk about vacuum, we talk about the absence of matter (as we know it) in space (as we know it), but space surely inhabited by a whole lot of other stuff. For example, there is a hell of a lot of light in vacuum. There is a hell of a lot of other energy, or matter, or non-matter in vacuum. I am talking about something even more bare than that. I am talking about nothing. Absolute nothingness. Where there is no time, for there is nothing to identify time. There is no matter, no energy, for either of these would immediately make it the swarming space we have here. And there is nothingness pervading everything. And that is so thick that it is almost as if there is nothing there at all.

Yeah, I know. You are thinking that all I am talking about is just a play of words to impress you. What if I tell you how to get there? What if I give you a series of steps which, if executed, will get you there? Then you will believe me, right?

Let me try. Do you know where the true nothing is? Inside you. Absolute nothing is your consciousness. Absolute nothing is your intelligence. Absolute nothing is your sense of "I" or "ID" or "IS". That, my friend, is the most absolute of nothings you will ever get to. Did you ever bother thinking about consciousness? There have been all forms of references to it - as a place, as a plane in space, as another dimension. Others have dismissed it as nothing more than just a bunch of neurons. I don't know. I believe that there is something that distinguishes us from non-intelligence. I believe that we are not just the sum of several atoms. That is too easy. If we were, it would mean that we are symbols of random sparks that happened sometime ago.

Don't get me wrong. I don't believe that we came either from the 7 days of work or from the golden lotus from the lord. Far from it; I accept that we came from random sparks of lightning in amino acids. But I believe that something happened then. We were introduced to nothingness. An element of basic consciousness was transferred. There appeared a means of trapping nothingness in the fabric of matter. And that became life.

So that, my friends, is a theory I came up with, just to pass some time. Listening to "Symptom of the Universe" by Ozzy. Sexy strings.

Did I tell you, my speakers rock! They are oh so incredibly incredible.

The next song is "Am I going insane", how appropriate.

~!nrk