Sunday, 8 March 2015

Coarse code representation in brain

Although a neuron can be considered either active or not, the brain, at its bare neocortex layer, doesn't understand binary information or process binary data. I mean the neocortex, for example, doesn't represent a number like 6 as 0110 and ... It seems that coarse coding is the way the brain understands things; numbers, shapes, images, ... all have a coarse code representation.

Let us see how it works, starting with numbers. What we had and described in the previous example doesn't work here, because our brain is not a binary processor; although we understand that kind of processing, the understanding is knowledge we have achieved via a hierarchical memory. So the simplest way will be like what we described in "Understanding Coarse Codes". Suppose we want to represent the numbers from 0 to 9,999. We can do it by allocating 10,000 neurons, each representing one number, but this is not that efficient. From the post we mentioned we have:

Number of neurons in u-v (fine resolution) coordinates: 10,000
Single axis high resolution: √10,000 = 100
Single axis low resolution : 5 // a value we simply choose
Number of coordinate layers: Round(100/5) = 20
Number of neurons in low resolution coordinates: 20 × (5×5) = 500
Efficiency = 10,000/500 = 20, or 2,000%

So to represent 0 to 9,999 we need only 500 neurons, in the form of 20 overlapped layers of coordinates, each having 25 squares or neurons. So with just 500 neurons, of which at most 20 are active at a time (20/500 = 4%), we can identify any number between 0 and 9,999.
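Since we use Java elsewhere on this blog, here is a small sketch that just redoes the arithmetic above; the class and method names are mine, only the numbers (a 100-unit axis, 5-unit coarse cells) come from the post:

```java
public class CoarseCode {
    // Fine resolution: a 100 x 100 grid, one neuron per position.
    static int fineNeurons(int axis) { return axis * axis; }

    // Coarse resolution: Round(axis/low) overlapped layers of low x low cells.
    static int coarseNeurons(int axis, int low) {
        int layers = Math.round((float) axis / low); // 100 / 5 = 20 layers
        return layers * low * low;                   // 20 * 25 = 500
    }

    public static void main(String[] args) {
        int fine = fineNeurons(100);        // 10,000
        int coarse = coarseNeurons(100, 5); // 500
        System.out.println(fine / coarse);  // efficiency: 20, i.e. 2,000%
    }
}
```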

Thursday, 5 March 2015

Converting numeric data to coarse code

Dense vs. coarse code for a 16-bit number: 16 bits dense, and the equivalent 13 bits of coarse code.
OK, suppose you have a single variable that represents the current value of a sensor: network usage traffic, the heart rate of a human and ... Each time you read the sensor you just get a number; suppose it is an integer in the interval [0, 65535]. If you want to store and process it in a dense format, you need 16 bits (or neurons) to show the sensor output in dense binary format.

Let us get back to the previous post and represent the 16-bit number as a 4 × 4 square, like the picture. So with this 16-bit memory page, we can show numbers from 0 to 65,535. As you may remember, if we want to use coarse coding to represent these 16 bits, we need a total of 13 bits, as shown in the picture. Note that as the pages get larger and memory grows, the efficiency gets better and better. But for now, forget about the efficiency; just try to understand how it works.

Where is the problem?
The problem is that our storage systems and models are built/designed to store dense binary data. Even our imaginations digest and understand these kinds of storage better. So we have no choice other than figuring out a way to represent these 13 bits in an ordinary dense binary system. This may cost us some encoding and decoding procedures too.
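As a sketch of one possible way to do that in Java: a coarse code is ultimately just a set of active positions, and java.util.BitSet can hold those positions as an ordinary dense bitmask. The bit positions below are made up for illustration; the post's exact 13-bit layout is the one in the picture.

```java
import java.util.BitSet;

public class DenseStore {
    // Pack the active positions of a coarse code into a dense bitmask.
    static BitSet encode(int[] activePositions, int width) {
        BitSet dense = new BitSet(width);
        for (int p : activePositions) dense.set(p);
        return dense;
    }

    public static void main(String[] args) {
        // Hypothetical coarse code with active bits at positions 1, 5 and 12.
        BitSet code = encode(new int[] {1, 5, 12}, 13);
        System.out.println(code);               // {1, 5, 12}
        System.out.println(code.cardinality()); // 3 active bits out of 13
    }
}
```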

Tuesday, 3 March 2015

Understanding Coarse Codes

It turns out that the human brain uses coarse coding to store the information it gathers. In fact, this might be the easiest way to store the huge amount of information we gather as we explore the world and live through our lives. In a nutshell, we just store specifications of the things we see or hear or touch and ... There are neurons in our neocortex, each related to one of the concepts our senses understand, like big, small, square, circle, red, green, hot, cold and ... The corresponding neurons of these concepts are linked to each other and build something like a Bayesian network; there are even some mechanisms for the conditions and probability of moving along the network edges!

Coarse coding helps the brain stay accurate when measuring or comparing things. In fact, if we have 10 thousand concept-related neurons to figure out a scene or a voice or ..., we may just need to set 10 or 100 or 200 of these neurons active, not more. We will talk about this in later posts.

Sunday, 1 March 2015

More about SDR and brain memory

We saw we can represent a neuron memory layer as a matrix like the following:

[
  [a11, a12, ... , a1n],
  [a21, a22, ... , a2n],
  .
  .
  .
  [am1, am2, ... , amn]
]

And since it is a bit difficult to show an m × n matrix on the web, we prefer the following one-row form:

[a11, a12, ... , a1n, a21, a22, ... , amn]

The human brain stores information in sparse form, not dense. So you have, for example, 10,000 bits of information of which only a small portion, perhaps at most 5%, are active. Each neuron represents one bit of information.
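A minimal Java sketch of this idea, assuming we simply keep the indices of the active neurons instead of the whole dense layer (the names are mine):

```java
import java.util.ArrayList;
import java.util.List;

public class SparseMemory {
    // Convert a dense boolean layer into a sparse list of active indices.
    static List<Integer> toSparse(boolean[] dense) {
        List<Integer> active = new ArrayList<>();
        for (int i = 0; i < dense.length; i++) {
            if (dense[i]) active.add(i);
        }
        return active;
    }

    public static void main(String[] args) {
        boolean[] layer = new boolean[10_000];          // 10,000-bit layer
        layer[42] = layer[1_000] = layer[9_999] = true; // only 3 active neurons
        System.out.println(toSparse(layer));            // [42, 1000, 9999]
    }
}
```

Storing three integers instead of ten thousand booleans is the whole point of a sparse representation.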

Thursday, 26 February 2015

Human neocortex

Before all else, let me say I have no deep knowledge of or experience in biology or neuroscience, so most of what I'm going to talk about is based on the papers and work of Jeff Hawkins and some other scientists. I usually like to get ideas from nature and then design software systems based on these ideas, because nature is already proof that a complicated modular system can work. So I listen to these guys, try to understand as much as I can, and then just use the inspiration, nothing else.

OK ... I've always thought that the computer technologies we currently use can't give us enough power to build something as intelligent as a biological creature. We are quite far from building even something like a mosquito. Why? Because I think we have chosen a not-quite-right path from the start.

Tuesday, 24 February 2015

Sparse Matrix Storage

Why do we need to work with sparse matrices or sparse data representation? Look at the following video from the Max Planck Institute of Brain Research. It shows the working neurons of a small part of a mouse cortex. While there are millions of neurons in this part of the cortex, only a small number of them are active at any time, even when the brain is doing something complex. Yes, this is the way the brain processes and works with data: you can think of these neurons as the nonzero bits of information, and the whole remaining surface as the zero elements of a matrix or memory, in every single snapshot.


Active neurons in small part of the mouse cortex
(Max Planck institute of brain research)

A dense matrix has mostly nonzero elements, so in order to design storage for it, it is better to store it like the typical RAM storage we talked about in the previous post. So an m × n matrix of integers can be stored in a RAM, with each element address (i, j) mapped to the single address i × n + j (using 0-based indices). Remember that memory structure in a computer is in fact like a matrix: there are banks of memory, so you can think of each bank as something like a row of a matrix too.
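The row-major address mapping can be sketched in Java like this (0-based indices, n columns; the names are mine):

```java
public class RamMatrix {
    // Flat RAM address of element (i, j) in an m x n matrix: i * n + j.
    static int address(int i, int j, int n) {
        return i * n + j;
    }

    public static void main(String[] args) {
        int n = 4;                            // a matrix with 4 columns
        System.out.println(address(0, 0, n)); // 0  (first element)
        System.out.println(address(2, 3, n)); // 11 (row 2, column 3)
    }
}
```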

Saturday, 21 February 2015

Search in Random Access Memory

Image from www.shutterstock.com 
Random access memory is simply an array of directly addressable elements like Bytes, Words, Longs, ... or even Strings and ... Each memory element is directly addressable by its distance from the first element; this is what we are used to picturing as arrays in programming languages. If you are a database programmer, you may simply show it as a two-column table: the first column the primary key or address of the element, the second column the content of the element.

So if you have an 8-bit address register, you can address up to 2^8 = 256 elements; for 32 bits, up to 2^32 = 4G elements; and for 64 bits, up to 2^64 = 16E (Exa) elements. It means a typical 64-bit processor can address up to 16E Bytes of memory in theory, although they don't do it (I think most of them use 48 bits at the moment).

Sparse Distributed Representation

I don't know why, but we have learned that we are somehow always in an out-of-storage situation, memory or disk. We have learned we always have to compress data; for example, to show 256 simple ASCII characters, all we need is 8 bits. Even that much representation doesn't satisfy us, and we go on compressing data using mathematical formulas and ...

I never forget the time I started to study and work on designing databases, tables, fields ... it was all about normalizing and minimizing sizes as much as possible. Why should you keep duplicate information? It is a bad idea to have pre-calculated fields in tables! And ... But today, you can't have applications with tons of data while insisting on a normalized data design, or even a relational design.

In fact, you may have heard that relational database design is better for programmers and makes programming easier, while if your concern is dealing with a huge amount of data, the relational model doesn't work and you need to go for something else. The same goes for the way we keep information in memory.


5 active bits in dense and sparse memory.

Tuesday, 17 February 2015

Bug and instability propagation, working with numbers

Look at the following numbers, which show the number of lines of code in some famous software applications. I got them from informationisbeautiful.net:

Average iPhone app:          50,000 =   50K
Camino Web browser:         200,000 =  200K
Photoshop CS6:            4,500,000 =  4.5M
Google Chrome:            5,000,000 =  5.0M
Firefox:                  9,500,000 =  9.5M
MySQL:                   12,000,000 = 12.0M
Boeing 787:              14,000,000 = 14.0M
Apache Open Office:      22,000,000 = 22.0M
Windows 7:               39,500,000 = 39.5M
Large Hadron Collider:   50,000,000 = 50.0M
Facebook:                62,000,000 = 62.0M
Mac OS X Tiger:          85,000,000 = 85.0M
.
.
.
Human Genome:         3,300,000,000 =  3.3G (!!!)

I don't know how accurate these numbers are, but even if they are only 50% accurate, there is still something we have to think about:

Wednesday, 11 February 2015

Chain reaction model for bug propagation

In the two previous posts, we tried to see the effect of a modular topology on bug generation in a development cycle. We saw that if we add some new lines to the project and edit some previously written lines, we get the following total bugs in the modules we updated:

DDB  =  ε Σ Ei  +  η Σ Ni    (2)

Σ Ei is the count of all edited lines and Σ Ni is the count of all new lines of code. ε is the ratio of bugs per edited line and η is the ratio of bugs per new line of code. And for the neighbors' effect in a full mesh we considered just one level of impact, which was:

IDB  =  ψ(n-1) (Σ Ei + Σ Ni)    (1)

ψ is the ratio of bugs per newly added neighbor's bug. We also saw that in order to bring down the side effects, or indirect effects, we can design a software module topology for our application. So for a sample of 4-module fully connected meshes in the first layer, and then another fully connected mesh among these 4Ms, we have the indirect bugs as:

Tuesday, 10 February 2015

The effect of software topology in bug creation

In the last post, we implicitly assumed that we have a fully connected mesh topology among our software modules, while there is almost no application in which all modules have a connection to all others.

A not fully connected mesh topology
For example, although in the picture module M2 doesn't have any direct connection to M3, we can assume effects on M1 caused by M2 can have an effect on M3 (via M1). But since we are building a simple model, we ignore middle-man effects and simply accept direct neighbors' effects at the moment.

We have to find out how we can fix our model to support such a topology. The answer is: WE CAN'T, unless we have a model for our software. So let us build a general model for the software.

OK, suppose there is a restriction in the application that any 4 small modules can have a full mesh connection, and the whole application itself, which contains many of these 4M units, can have a fully connected mesh topology among its 4Ms.

Monday, 9 February 2015

A model for software development bug calculation

What I'm going to talk about is just a sample study of why software applications get complex and buggy over time. I'm trying to build a simple model for a software application's bugginess, so we need some definitions first and a scenario. Let's get started.

First, we all know software applications should evolve, because their environment, users, and needs evolve. So we have to modify applications or add features to support the ecosystem's evolution.

There are many metrics to measure software quality, but for our simple model let's just deal with "bugs per lines of code". I've seen an average of, say, 15 bugs per 1,000 lines of code in papers; this is just a number to give a feeling of what we are talking about, since we are going to use symbols instead of numbers. I just want to emphasize that even advanced programmers write code with bugs. We use two types of symbols, one for new lines of code and one for edited lines of code. Why? Because the effects of these two on generating bugs are not the same (we will talk about it later):

Tuesday, 3 February 2015

Software longevity: Memory & Hard Disk

If you ask me what main factors don't let even carefully designed, developed and tested software continue working in the long run, I'd say "Memory & Disk". These two factors always get underestimated by programmers, or get no attention at all. Remember from the previous post that the reason for NASA's Spirit rover problem was a full disk.

Why do we always assume we have the required amount of memory and disk space all the time? Why don't our tests show these problems? There are many reasons; let me bring up some of them here:
1- Wrong kind of tests: We usually don't test developed software over the long term, or don't put it under test for weeks or months. This mostly happens when you are new to software testing and don't have enough experience: you get excited when it works for weeks, but this doesn't mean your software doesn't leak memory or waste disk space.
2- Not enough time: Even if we test the software for months, we usually can't test it for years, although many software systems should work for years without restarting. This one happens when you are under pressure to deliver the software as soon as possible.
3- Fear of failure: We get emotionally involved in software development and in the developed software itself. This has good and bad sides; one of the bad sides is that we always have some fear of failure when we try to test the software, so we usually test it gently, and as soon as we see some signs of acceptance we finish the test.
4- Not a real environment: If you think the user dedicates hardware to your software and gives it enough memory and disk space, you are wrong. They may do this at first, but sooner or later they will install other software on the machine. Even if we test the software for months and apply every kind of test procedure to it, we usually forget that this is not the real environment the software is going to work in.

Monday, 2 February 2015

Software fails, it is inevitable

If you search the web for big and famous software bugs or failures, you'll find plenty of them, like the following:
  • The famous Y2K problem we experienced in the year 2000.
  • Similar to Y2K, we will face a problem in 2038, in which the Unix timestamp variable, a 32-bit integer, overflows. Why? Just do the simple math: 2,147,483,647/3600/24/365 ≈ 68 years, and since it started from 1970, 1970 + 68 = 2038!
  • The famous northeast blackout of 2003 was the result of a race hazard in monitoring software.
  • The Mariner 1 crash was caused by a wrong piece of the program: the programmer should have used the averaged speed in a calculation instead of the instant speed.
  • NASA's Mars Polar Lander's loss of communications in December 1999 was caused by a software error.
  • NASA's Spirit rover became unresponsive after landing on Mars because of too many files stored on its flash memory. The problem was solved just by deleting the unwanted files.
  • ...
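The Y2038 arithmetic above is easy to check in Java; this small sketch just redoes the division (class and method names are mine):

```java
public class Y2038 {
    // A 32-bit Unix timestamp counts seconds since 1970 and tops out at
    // Integer.MAX_VALUE = 2,147,483,647 seconds.
    static long overflowYear() {
        long years = Integer.MAX_VALUE / 3600L / 24 / 365; // ~68 years
        return 1970 + years;
    }

    public static void main(String[] args) {
        System.out.println(overflowYear()); // 2038
    }
}
```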

Why do such things happen? 
We sometimes think it is only us, I mean ordinary programmers in ordinary companies, who make mistakes or write programs with bugs, but this is not true. Don't worry: everybody in every company can make a mistake, and does. There are hundreds of these costly historical mistakes available to read and learn from on the internet.

Friday, 30 January 2015

Java Functional Interface

Java's Function and UnaryOperator, provided in the "java.util.function" package, are two examples of functional interfaces, and you can use them as targets for lambda expressions or method references. The package contains more interfaces; we will talk about them later. Let us see how they work.

If you have had a chance to work with a stream of data, you have noticed that what we usually do on the stream is apply some separate procedures to each packet of data or message. For example, processes like:
  • cleaning the message
  • normalizing 
  • looking up the price
  • calculating some index
  • and ...

Java 8's java.util.function package gives us some useful interfaces to facilitate this step-by-step, functional processing. Note that the processes we talked about all accept one parameter and return one value (lambda-calculus like). This is not a limitation, because the passed and returned parameter can always be a complex Java class.
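A minimal sketch of such a pipeline, assuming two toy stages named clean and normalize that stand in for the message-processing steps listed above:

```java
import java.util.function.Function;
import java.util.function.UnaryOperator;

public class Pipeline {
    // Each step accepts one value and returns one value, so steps compose
    // with andThen into a single processing function.
    static final UnaryOperator<String> clean = String::trim;
    static final UnaryOperator<String> normalize = s -> s.toLowerCase();
    static final Function<String, String> process = clean.andThen(normalize);

    public static void main(String[] args) {
        System.out.println(process.apply("  Hello World  ")); // hello world
    }
}
```

Adding another stage, say a lookup or an index calculation, is just one more andThen call.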

Wednesday, 28 January 2015

Keep the state with single page application

Loading only one page when you run a single-page application (SPA) is not the only benefit of this model for building an application. I think keeping the state of the application in the client's browser is the most important feature we can benefit from.

Regardless of the way or technology you use to build your web application, if you change or reload pages, you know you always have the problem of how to pass state or tracking information from one page to another.

Sample model for SPA
Building SPA with no framework 
Honestly, I hate frameworks. They come and go, sometimes without backward compatibility, and many of them don't care how many people got addicted to and used them. (I know, this is life.)

Look at the picture; developing such an application is easy even without any framework. If you want to do it cleanly, you need to write it in JavaScript, OOP-like, and have the application objects talk to the back end through Ajax request/response.

Writing such an application, especially when you want to change UI components or their parameters, is most of the time easier than handling them in a fixed HTML-tag-based application with JavaScript. OK, let us see how we can develop such an application.

Monday, 26 January 2015

Functional Programming, d3.js a good example

If you have ever tried to use d3.js to visualize your data, you have already used some features of functional programming. I never forget the difficulty of the first time I tried to understand the d3.js sample code. In fact, it was d3.js and JavaScript that encouraged me to try to understand Lambda Calculus itself. Let us take a look at a simple graph and see how well functional programming works here.

First take a look at the page and its source that we are going to talk about. Here is the d3.js sample graph ... Just note that the strange-looking included script is nothing but the path of "d3.v3.min.js" uploaded to Google Drive.

JavaScript is one of the strangest programming languages you can find; you may have heard of the good and bad parts of JavaScript! It lets you use many features of procedural languages as well as features of event-driven, object-oriented and functional programming. d3.js is a single JavaScript library that gives us all the visualization support we want, almost all in functional form.

Friday, 23 January 2015

Java functional programming, Scaling out

In this post I just want to show a single benefit of using functional programming; I'll talk about functional programming in detail later. It does have many benefits and features.

Java 8 supports some of the many functional programming features. Although there are many arguments about the way Java 8 produces binary code for your functional code, I just wrote a test to see whether we can take advantage of functional programming in Java 8 or not.

Definitely, since at the heart of Java everything is a class, the binary implementation of the functional part will also be somehow class-related code, so we have overhead beyond plain function calls. I said "function calls" just to remind you that when you write a piece of code in a functional style (or lambda calculus), this part of the code will be nothing but calls to many functions, and this means pushing parameters and return addresses onto the stack, executing code, popping after return and ... This means using memory, many jumps and more CPU instructions, which costs a lot.
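As a rough illustration of the scaling-out idea (not the exact test I wrote), Java 8 streams let the same functional code run on all cores just by adding .parallel(); the method and class names here are mine:

```java
import java.util.stream.LongStream;

public class ScaleOut {
    // Sum of squares 1..n written functionally; .parallel() asks the
    // runtime to spread the work over all cores with no extra plumbing.
    static long sumOfSquares(long n) {
        return LongStream.rangeClosed(1, n)
                         .parallel()
                         .map(i -> i * i)
                         .sum();
    }

    public static void main(String[] args) {
        System.out.println(sumOfSquares(1_000_000));
    }
}
```

Because map and sum carry no mutable state of our own, the parallel and sequential versions give the same result; that is exactly the property that makes functional code easy to scale out.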

Thursday, 22 January 2015

Lambda Calculus, Recursion and loop

A loop or iteration needs mutation of at least one variable in the loop body, and of course Lambda Calculus doesn't let you do that! It means iteration is impossible in functional languages, but we can somehow simulate it using recursion.

Infinite Loop
Let us define a term (abstraction) like the following:

T = λx.x x

What if we try to evaluate the application T T, let us see:

T T  ≡  (λx.x x) (λx.x x)  →  (x x) [x := λx.x x]  ≡  (λx.x x) (λx.x x)  ≡  T T

As you see, if we try to get the result of this application, we get the application itself. In fact, it is a kind of infinite loop; T T is a kind of self-application, which usually leads to recursion. This kind of infinite loop is denoted by the Ω symbol, so you may say that:

Monday, 19 January 2015

Lambda Calculus, Evaluation Order, Call by name and value

When you try to evaluate an application, you may have to use many reductions, and usually there is more than one strategy for choosing the order of the reductions. For example, if you have more than one instance of an application like (λx.T1) T2 in an expression (T1, T2, T3 are terms) like the following:

(λx.(λy.T1) T2) T3

Then you can have more than one way to reduce the application and find out the result. For a simple example assume:

T1 = y  and  T2 = x

So the application will be something like:

(λx.(λy.y) x) T3

Now you can either go from inner side to out:

(λx.(λy.y) x) T3  →  (λx.x) T3  →  T3     (1)

Or you can reduce from outer to inner, like this:

(λx.(λy.y) x) T3  → (λy.y) T3   →  T3    (2)

Causality and Root Cause Analysis

Software programs crash, people make mistakes, accidents or disasters happen and ... We can't always be ready for all of them or prevent all of them, but we can perform analysis to find out why they happened and prevent them from happening again. One of the approaches to identifying the reasons for a problem is Root Cause Analysis, or RCA for short.

RCA helps us find the underlying cause or causes of problems, usually of those we don't want to happen. But sometimes, when we always get bad results and suddenly get a good one, we can use RCA to find the reasons for that good result too!

It is one of the best approaches to problem solving, because it doesn't try to patch or fix the symptom; it looks for the root causes, and when you remove them, the problem will never bother you again.

Friday, 16 January 2015

Lambda Calculus, Working with numbers

Lambda calculus is for functional computation, so there is nothing in it such as numbers, but we can have definitions for the natural numbers too, like those we had for True, False and the other logical elements. The natural numbers we are going to introduce are called Church numerals, because Alonzo Church was the first to introduce them in lambda calculus.

In this definition, a higher-order function represents a natural number: to show the number n, we compose a function (any function) n times with itself on a given input. Look at the following:

0 : x                              ≡  λf.λx.x                      // applying f 0 times to the given input
1 : f ( x )                        ≡  λf.λx.f x                    // applying f 1 time to the given input
2 : f ( f ( x ) )                  ≡  λf.λx.f ( f x )              // applying f 2 times to the given input
3 : f ( f ( f ( x ) ) )            ≡  λf.λx.f ( f ( f x ) )        // applying f 3 times to the given input
4 : f ( f ( f ( f ( x ) ) ) )      ≡  λf.λx.f ( f ( f ( f x ) ) )  // applying f 4 times to the given input

n : f ( f ( ... f ( f ( x ) ) ... ) )  ≡  λf.λx.f^n x  // f^n means applying f n times

As you see, depending on what number we are going to represent, we compose the function n times on a single given argument.
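These numerals translate almost directly into Java 8 lambdas; this is my own sketch, not standard library code, with the numeral fixed to functions on Integer to keep the generics readable:

```java
import java.util.function.Function;

public class ChurchNumerals {
    // A Church numeral maps a function f to its n-fold composition.
    interface Church extends Function<Function<Integer, Integer>, Function<Integer, Integer>> {}

    static final Church ZERO = f -> x -> x;   // λf.λx.x : apply f 0 times

    // succ n = λf.λx.f (n f x) : apply f one more time.
    static Church succ(Church n) {
        return f -> x -> f.apply(n.apply(f).apply(x));
    }

    // Read a numeral back as an int by counting with f = x -> x + 1 from 0.
    static int toInt(Church n) {
        return n.apply(x -> x + 1).apply(0);
    }

    public static void main(String[] args) {
        Church three = succ(succ(succ(ZERO)));
        System.out.println(toInt(ZERO));  // 0
        System.out.println(toInt(three)); // 3
    }
}
```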

Wednesday, 14 January 2015

Lambda Calculus, Logical operators

We saw that we can use the following abstractions to define True, False and If-Then-Else statement:

True  :  λxy.x
False :  λxy.y
if p then x else y : λpxy.p x y 

We are going to use these three abstractions and the reduction rules we talked about before to show how easily we can derive the logical operators in lambda calculus.

Logical AND
There are many ways to define abstractions for the logical operators; let us start with a sample for logical AND. We can define the AND operator as follows:

λx.λy.x y x

To show this abstraction is correct, it must comply with the truth table of the logical AND operator. So if x equals True and y equals False we have:
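The same truth-table check can be done in Java, encoding True and False as two-argument selectors; this is my own encoding of the abstractions above, with a pick method standing in for function application:

```java
public class ChurchAnd {
    // A Church boolean selects one of two values: λxy.x or λxy.y.
    interface Bool { <T> T pick(T ifTrue, T ifFalse); }

    static final Bool TRUE  = new Bool() { public <T> T pick(T a, T b) { return a; } };
    static final Bool FALSE = new Bool() { public <T> T pick(T a, T b) { return b; } };

    // AND = λx.λy.x y x : apply x to y and x.
    static Bool and(Bool x, Bool y) { return x.pick(y, x); }

    public static void main(String[] args) {
        System.out.println(and(TRUE, TRUE).pick("T", "F"));   // T
        System.out.println(and(TRUE, FALSE).pick("T", "F"));  // F
        System.out.println(and(FALSE, TRUE).pick("T", "F"));  // F
        System.out.println(and(FALSE, FALSE).pick("T", "F")); // F
    }
}
```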

Monday, 12 January 2015

Lambda Calculus, Boolean and Comparison

True & False
In lambda calculus, True and False are both functions, and at first they may look a bit strange. Look at them:

True  :  λxy.x
False :  λxy.y

Let us see what they mean. A typical inline conditional statement in programming languages usually looks like the following:

Condition ? TrueStatement : FalseStatement

If we use True and False to simply rewrite it, we arrive at the definitions of True and False:
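In Java 8 we can write these two selector functions directly and use them exactly like the inline conditional above (a sketch; the names are mine):

```java
import java.util.function.Function;

public class ChurchBool {
    // True = λxy.x selects its first argument.
    static <T> Function<T, Function<T, T>> TRUE() {
        return x -> y -> x;
    }

    // False = λxy.y selects its second argument.
    static <T> Function<T, Function<T, T>> FALSE() {
        return x -> y -> y;
    }

    public static void main(String[] args) {
        // Condition ? TrueStatement : FalseStatement becomes
        // condition.apply(trueValue).apply(falseValue).
        System.out.println(ChurchBool.<String>TRUE().apply("then").apply("else"));  // then
        System.out.println(ChurchBool.<String>FALSE().apply("then").apply("else")); // else
    }
}
```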

Saturday, 10 January 2015

Try to understand concepts

The reason for sharing this post is that for a second I thought some may say:

"Who needs these lambda calculus stuff to do functional programming!?"

You may not need it, especially if you want to be just a good programmer, nothing more. Yes, you can do functional programming without knowing anything about lambda calculus. I think I already said, when I was writing about stability patterns, that you should try not to just learn the names of the patterns and then use them because you know the names. Instead, know the reasons why software gets unstable and then learn how you can make it stable; it is much more effective than learning just the names of patterns and then using them as the books say.

Richard Feynman, the American physicist who gave very famous lectures on physics, has a saying:

"You can know the name of a bird in all the languages of the world, but when you're finished, you'll know absolutely nothing whatever about the bird. You'll only know about humans in different places, and what they call the bird … I learned very early the difference between knowing the name of something and knowing something."

Friday, 9 January 2015

Lambda Calculus, Conversions and Reduction

There are three methods of reducing or converting a lambda expression: two methods of conversion and one method of reduction. In some books or papers, the conversion rules are also considered reduction rules. These processes are in fact rewriting rules that help us change the appearance of a λ expression into a form we can work with more easily, for any reason. A λ expression that can't be reduced any more is called normal.

Alpha Conversion
α-conversion is nothing but renaming a bound variable in a λ expression. We all know from mathematics that the function f(x)=2x has exactly the same properties as f(y)=2y; the same idea exists in λ calculus but has a special name, α-conversion. So λx.x and λy.y are the same term, the identity function; we sometimes say that λy.y is the α-equivalent of λx.x.

It is obvious we can't rename x to y in f(x)=xy+x, since y is already in use; the same is true in λ calculus. So λz.λy.zy+z is α-equivalent to λx.λy.xy+x, but λy.λy.yy+y is not.

Now, if you rename a free variable in a λ expression, the process is called substitution; you may need to do some α-conversion before substituting a free variable. This is what we usually do in programming too: when we want to change the name of a variable, we need to make sure there is no other variable in its scope with the same name.

Wednesday, 7 January 2015

Lambda Calculus, Currying and higher order function

As we saw, in lambda calculus functions can only accept one parameter. The technique we use to convert a function with multiple arguments into a sequence of functions, each with a single argument, is called currying.

Suppose we have f(x,y)=x+y+xy from N×N to N, as below. Remember from the last post that we assumed we are no longer going to use prefix notation, to keep the expressions simpler:

λxy.x+y+xy

If we want to evaluate f(4,9), we can first evaluate f for x=4, which returns a function like h(y)=4+y+4y, i.e. h(y)=4+5y:

λxy.x+y+xy  ≡  λx.λy.x+y+xy,  so  (λx.λy.x+y+xy) 4  →  λy.4+y+4y  =  λy.4+5y
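This is exactly what currying looks like in Java 8: f is a function that returns a function, and f.apply(4) is the partial application that gives us h (the names f and h match the text; the class name is mine):

```java
import java.util.function.Function;

public class Curry {
    // f(x, y) = x + y + xy as a chain of one-argument functions.
    static final Function<Integer, Function<Integer, Integer>> f =
            x -> y -> x + y + x * y;

    public static void main(String[] args) {
        Function<Integer, Integer> h = f.apply(4); // h(y) = 4 + 5y
        System.out.println(h.apply(9));            // 4 + 5*9 = 49
    }
}
```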

Tuesday, 6 January 2015

Lean Software Development, "Think big, Act small, Fail fast and Learn rapidly"

Lean Software Development is almost a software version of the Toyota Production System we talked about in the previous post: a software development paradigm which emphasizes minimizing waste and maximizing efficiency. If we want to describe LSD in one statement, the following is the best:

"Think big, Act small, Fail fast and Learn rapidly"

Like TPS, LSD has some key principles which somehow fulfill the above statement as follows:

1- Eliminate Waste 
Unfortunately for many of us, if you look around your office you'll find some people who do almost nothing or have a small positive impact on the project's development and life cycle. We all more or less know why these people are there, why managers must/like to keep them and can't/don't want to get rid of them (I'll talk about my experience on this issue soon). But if you really want lean development and can't fire them, you need to use them somewhere or somehow else; otherwise you are wasting money.

Monday, 5 January 2015

Toyota Production System

I thought it might be a little boring to continue writing about Lambda Calculus, so I decided to talk about one of the Japanese management philosophies which has had a huge influence on software engineering too. It is called TPS, or the Toyota Production System.

TPS is the process Toyota developed and practiced between 1948 and 1975 for use in its production lines. It is based on Toyota's philosophy, which has two main parts:

  • Continuous Improvement:
    • Establishing a long-term vision
    • Working on challenges
    • Continual innovation
    • Solving the source of the issue or problem
  • Respect for People:
    • Ways of building respect
    • Ways of building better teamwork

Saturday, 3 January 2015

Lambda Calculus, A review and an assumption

Lambda Calculus is a mathematical formal system and the core and foundation of all functional programming languages. If we assume T is a valid expression in λ calculus, which we call a term, then the following syntax shows almost the whole foundation of this system:

T :=  x   |  c  |   λx.T   |   T T

The above means that every term in λ calculus is either a variable like x, a constant like c, an abstraction like λx.T, or an application like T T.

The only thing that is a bit new in this notation is that we never showed an abstraction like λx.T before, which is exactly like λx.* 2 x if we just assume T equals * 2 x. We are also not familiar with T T, which is the general form of an application. We saw examples like (λxy.* x y) 5 10 or (λx.* 2 x) x or (λx.x) λx.x, ... and if you look at them carefully, you'll see these are nothing but T T. We just used parentheses to explicitly show which part of the expression is the abstraction and which part is the term we want to pass to it; in fact, we used (T) T instead of T T.

Thursday, 1 January 2015

Lambda Calculus, Application

Application in mathematics is when you choose some values from a function's domain and compute the function's result for the given arguments, which will be in the range of course. So simply, for the function f(x)=2x from N to N, f(2)=4 and f(10)=20 are applications. And for f(x,y)=xy from N×N to N, we have applications like f(1,2)=2 or f(5,10)=50 and ...

In λ calculus, the notation for the above examples is like the following:
  • f(x)=2x , x=2  =>  (λx.* 2 x) 2  // sometimes written (λx.* 2 x 2), but that is a bit ambiguous, especially if we don't know how many operands * operates on, so I prefer the first version
  • f(x,y)=xy , x=5, y=10  =>  (λxy.* x y) 5 10 or (λx.(λy.* x y) 10) 5

Note that in the first example we don't write λx.* 2 x 2, which would mean the multiplication operator operates on any number of operands and multiplies them all (4x in this case). We have to use parentheses to show this is an application; otherwise it is something totally different. So always use open and close parentheses to show an application.
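Both examples translate directly to Java 8, where applying an abstraction to an argument is just Function.apply (a sketch with my own names):

```java
import java.util.function.Function;

public class Application {
    // (λx.* 2 x) as a Java function: build the abstraction once, apply it later.
    static final Function<Integer, Integer> doubler = x -> 2 * x;

    // (λxy.* x y) in curried form, so (λx.(λy.* x y) 10) 5 becomes two applies.
    static final Function<Integer, Function<Integer, Integer>> times = x -> y -> x * y;

    public static void main(String[] args) {
        System.out.println(doubler.apply(2));         // f(x)=2x, x=2       -> 4
        System.out.println(times.apply(5).apply(10)); // f(x,y)=xy, x=5,y=10 -> 50
    }
}
```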