Friday, October 31, 2008

DTrace Basics

DTrace is a comprehensive dynamic tracing facility that can be used by administrators and developers to examine the behavior of both user programs and of the operating system itself. With DTrace we can explore our system to understand how it works, track down performance problems across many layers of software, or locate the cause of aberrant behavior. It is safe to use on production systems and does not require restarting/recompiling either the system or applications.

We write D scripts, which consist of a probe description, an optional predicate, and the actions to be taken:
probe description
/predicate/
{
actions
}

When we run the D script, we get results based on the probe descriptions (the actions are executed only when the predicate filter passes). Think of probes as events: a probe fires when the event happens. Let's take a simple D script, example.d:

syscall::write:entry
/execname == "bash"/
{
printf("bash with pid %d called write system call\n",pid);
}

Here the probe description is syscall::write:entry, which describes entry into the write system call. The predicate is execname == "bash"; execname is a built-in variable which contains the executable name, and we proceed with the actions only when the string matches. The action statement uses the built-in function printf.


Providers/Probes

To list all of the available probes on your system, type the command:
# sudo dtrace -l

It might take some time to display all of the output. To count up all your probes, you can type the command:

# sudo dtrace -l | wc -l
22567

If you look at the output from dtrace -l in your terminal window, each probe has two names: an integer ID and a human-readable name. The human-readable name is composed of four parts. When writing out the full human-readable name of a probe, we write all four parts separated by colons, like this:

provider:module:function:name

You might note that some fields are left blank. A blank field is a wildcard and matches all of the probes that have matching values in the parts of the name that you do specify.

Now let's look a little deeper. The probe is described using four fields, the provider, module, function, and name.

* provider—Specifies the instrumentation method to be used. For example, the syscall provider is used to monitor system calls, while the io provider is used to monitor disk I/O.
* module and function—Describe the module and function you want to observe.
* name—Typically represents the location in the function. For example, use entry for name to instrument when you enter the function.

Note that wildcards like * and ? can be used, and blank fields are interpreted as wildcards. The table below shows a few examples:

Probe Description Explanation
syscall::open:entry entry into the open system call
syscall::open*:entry entry into any system call that starts with open (open and open64)
syscall:::entry entry into any system call
syscall::: all probes published by the syscall provider

A predicate can be any D expression. The action is executed only when the predicate evaluates to true. The table below shows some examples:
Predicate Explanation
cpu == 0 true if the probe executes on cpu0
pid == 1029 true if the pid of the process that caused the probe to fire is 1029
execname != "sched" true if the process is not the scheduler (sched)
ppid != 0 && arg0 == 0 true if the parent process id is not 0 and the first argument is 0

The action section can contain a series of action statements separated by semicolons (;). The table below provides some examples:
Action Explanation
printf() print something using C-style printf() command
ustack() print the user level stack
trace() print the given expression

Note that predicates and action statements are optional. If the predicate is missing, then the action is always executed. If the action is missing, then the name of the probe which fired is printed.

The links below provide references for different parts of a probe.
List of providers
List of functions
List of aggregating functions
List of variables
List of built-in variables

Examples

pid provider
------------
Example Explanation
pid2439:libc:malloc:entry entry into malloc() in libc for process id 2439
pid1234:a.out:main:return return from main for process id 1234
pid1234:a.out::entry entry into any function in the main executable (a.out) of pid 1234
pid1234:::entry entry into any function in any library for pid 1234

You can limit the number of probes enabled by modifying the probe description.
Probe Description Explanation
pid$1:libc::entry Limit probes to only a given library
pid$1:a.out::entry Limit probes to non-library functions
pid$1:libc:printf:entry Limit probes to just one function

Here is the command you can run to print all the functions that process id 1234 calls:
# dtrace -n pid1234:::entry

You can turn this into a script that takes the process id as a parameter. The script looks like:

#!/usr/sbin/dtrace -s
pid$1:::entry
{}

Here is a script to collect the user stack trace each time the program makes the write system call. Note that you need to run this with the -c option.

#!/usr/sbin/dtrace -s
syscall::write:entry
{
@[ustack()]=count();
}

The syscall Provider
--------------------
This is probably the most important provider to learn and use because system calls are the main communication channel between user level applications and the kernel.

To trace every close(2) system call at entry, along with basic information about the calling process, use the following one-liner:

# dtrace -n syscall::close:entry

To identify the process which sent a kill(2) signal to a particular process, use the following script:

#!/usr/sbin/dtrace -s
syscall::kill:entry
{
trace(pid);
trace(execname);
}

The proc Provider
-----------------
Trace all the signals sent to all the processes currently running on the system:

#!/usr/sbin/dtrace -wqs
proc:::signal-send
{
printf("%d was sent to %s by ", args[2], args[1]->pr_fname);
system("getent passwd %d | cut -d: -f5", uid);
}

Add the predicate (/args[2] == SIGKILL/) to the script and send SIGKILL signals to different processes from different users.

#!/usr/sbin/dtrace -wqs
proc:::signal-send
/args[2] == SIGKILL/
{
printf("SIGKILL was sent to %s by ", args[1]->pr_fname);
system("getent passwd %d | cut -d: -f5", uid);
}

Here you can see the introduction of pr_fname, which is a member of the psinfo_t structure of the receiving process.

References :

Dtrace @ OpenSolaris
Dtrace inventor blogs
Big Admin Page
Dtrace Guide

Thursday, July 3, 2008

gdb equivalent commands on dbx

The DBX debugger is found on the Solaris & AIX platforms. Since dbx and gdb, the other most popular debugger, use different commands, this note is for people who want to see the gdb commands for/on dbx.

DBX doesn't support command completion and abbreviation like gdb, but there are other ways to make it work a bit like gdb. dbx does have a gdb mode (gdb on), but it lacks some of the gdb commands. Below I give the most commonly used commands for the two debuggers. For all the commands, the dbx command is on the left of the ":" and the gdb equivalent on the right.

Reading Core files

dbx - core : gdb -c core # Reading the core file.
dbx - pid : gdb -p pid # dbx can find the program automatically.

Logging

dbxenv session_log_file_name file : set logging # logging o/p to a file
dbxenv session_log_file_name : show logging

Debugging Information Support

stabs (SUN), dwarf2, -g -O : stabs (GNU), dwarf2, -g -O
Macro support (-g3) : Macro support (-g3) # Macro debugging support

Sun Studio compilers don't generate debug info for macros, though.

Debugging Programs with Multiple Processes

dbxenv follow_fork_mode parent : set follow-fork-mode parent
dbxenv follow_fork_mode child : set follow-fork-mode child
dbxenv follow_fork_mode ask : -

Breakpoints

stop in function : break function
stop at [filename:]linenum : break [filename:]linenum
stopi at address : break *address # Stop at an instruction address
status [n] : info breakpoints [n] # Show all breakpoints
delete [breakpoints] : delete [breakpoints] [range ...]# delete breakpoint
delete all : - # delete all breakpoints

Examining the Stack

where [n] : backtrace [n] # Shows the stack backtrace
frame [n] : frame [args] # go to a particular frame
dump : info locals # dump info about local variables

Examining Data

print -f expr : print /f expr
Array slicing (p array[2..5]) : Artificial arrays (p *array@len)
display : display
x addr [/nf] : x/nfu addr
regs : info registers
regs -f | -F : info all-registers
print $regname : info registers regname ...

Memory access checking

check -access : set mem inaccessible-by-default [on|off]
check -memuse : set mem inaccessible-by-default [on|off]
check -leaks : set mem inaccessible-by-default [on|off]

Examining the Symbol Table

whereis -a addr : info symbol addr
whatis [-e] arg : whatis arg
whatis [-e] arg : ptype arg
whatis -t [typename] : info types [regexp]
modules -v / files : info sources

Also, it's better to set up aliases from commonly used dbx commands to their gdb equivalents. I am using the below ~/.dbxrc file :
--
dalias alias=dalias

alias b="stop in" # set breakpoint in a function
alias sa="stop at" # set breakpoint at a line number
alias st=status # show breakpoints, numbered
alias del=delete # delete a breakpoint

alias cka="check -access" # check for invalid memory access
alias ckl="check -leaks" # check for memory leaks

alias r="run " # start the program running at its beginning
alias q=quit

alias w=where # show frames in call stack
alias bt=where # show frames in call stack
alias u=up
alias d=down
alias f=frame

alias l=list # list some source lines
alias lw="list -w" # from 5 before current line to 5 after
alias p=print # print value of variable or expression
alias ptype="whatis -t" # find declaration of variable or function
alias wi=whatis # find declaration of variable or function

alias ni=nexti
alias si=stepi
alias n=next # cont to next stmt in same function
alias s=step # step INTO the function about to be called
alias su="step up" # cont to next stmt in parent function
alias c=cont # continue running

alias h=history

Wednesday, April 16, 2008

Beginners AWK programming with examples

AWK derives its name from its creators Aho, Weinberger and Kernighan. Awk has two faces: it is a utility for performing simple text-processing tasks, and it is a programming language for performing complex text-processing tasks. It is also an "interpreted" language -- that is, an Awk program cannot run on its own; it must be executed by the Awk utility itself.

Basic Structure

awk [options] 'pattern action ...' [filenames]

Examples :
awk '/root/' /etc/passwd # root is the pattern here delimited by / & /
awk '{print}' /etc/passwd # prints the whole file

AWK supports multiple pattern action statements ( use shell's multiline capability )

Records and Fields
Each Line is a record.

$0 is the entire record.
$1..$127 are the fields 1 .. 127

Examples :
awk -F: '/root/{print $1}' /etc/passwd # -F specifies the field separator.
# prints the first field of each entry.

awk -F: '/root/{print $1,$7}' /etc/passwd # prints the 1st and 7th fields
# comma uses OFS which is a space

ls -l | awk '{print $9"\t"$5}'

awk '/^$/ {print "This is a blank line"}
/[a-zA-Z]+/ {print "Alphabets"}
/[0-9]+/ { print "Numerals"}'
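To see multiple pattern-action pairs in action, here is a small runnable sketch (the sample input lines are made up). Note that a line can match more than one pattern, and then triggers each matching action:

```shell
# Feed three sample lines to a multi-pattern awk script.
printf 'hello\n123\n\n' | awk '
/^$/        { print "This is a blank line" }
/[a-zA-Z]+/ { print "Alphabets" }
/[0-9]+/    { print "Numerals" }'
# prints Alphabets, then Numerals, then This is a blank line
```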

What would be the output of the below statement?
awk -F: '/root/{print $ $7}' /etc/passwd

Arithmetic
Examples :
awk -F: '{print $3,$3+1}' /etc/passwd
awk -F: '{printf("%10s %15s\n",$1,$7)}' /etc/passwd

Note that print appends a newline, but printf doesn't.

Relational Operators ( <,<=,>,>= )
Examples :
awk -F: '$3>500' /etc/passwd
awk -F: '$3==500' /etc/passwd
awk -F: '$3>500 && $3<510' /etc/passwd
awk -F: '$1 == "root" || $1 == "halt"' /etc/passwd

Regular Expression Operators
Regular expressions can also be used in matching expressions. The two operators, `~' and `!~', perform regular expression comparisons. Expressions using these operators can be used as patterns or in if, while, for, and do statements.

Examples :
awk '$1 ~ /^root/' # lines starting with root are printed
awk '$1 !~ /^root/' # lines whose first field does not start with root

Built-In Variables
1. NR ( No. of records processed so far )
NR gives the current line's sequential number.

Examples :
awk '/root/ { print NR,$0}' /etc/passwd # if matches print line no. and line.
awk 'NR>40' /etc/passwd # print from the 41st line
awk 'NR==5 , NR==10 {print NR}' /etc/passwd # print line nos 5 to 10
awk 'NR>5 && NR<10 { print NR}' /etc/passwd # print line no. > 5 and < 10
awk 'NR%2 == 1 { print NR }' /etc/passwd # print odd line numbers

2.FNR
NR counts lines continuously from the very beginning until the end. FNR restarts the count at the beginning of each input file.

So, for the first file processed they will be equal but on the first line of the second and subsequent files FNR will start from 1 again.

Examples :
awk '{print FNR,$0}' out out1 out2

3.NF ( Contains the no. of fields in the current line/record )
Examples :
awk '{print NF}'
awk 'NF>4' # print lines having > 4 fields

What would the following line output ?
awk '{print $NF}' /etc/passwd
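NF and $NF are easiest to see on made-up inline input; a minimal sketch:

```shell
# NF is the field count of the current line; $NF is the value of the last field.
printf 'a b c\n1 2 3 4\n' | awk '{ print NF, $NF }'
# line 1 has 3 fields ending in "c"; line 2 has 4 fields ending in "4"
```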

Output Redirection

Examples :
awk '/root/ { print NR,$0 > "out" }' /etc/passwd # redirects o/p to file named out
ls -l | awk '{print $5 | "sort -rn > sorted" }'
The above calls the sort command and redirects its output to the file sorted. Any external command should always be given in quotes.

ls -l | awk '{print $5 | "sort -nr | uniq "}'
ls -l | awk '{print $5 | "sort -nr | uniq > out"}'

BEGIN & END Blocks

BEGIN and END are special patterns. They are not used to match input records; rather, they supply start-up and clean-up actions for your awk script. A BEGIN rule is executed once, before the first input record has been read. An END rule is executed once, after all the input has been read. An awk program may have multiple BEGIN and/or END rules; they are executed in the order they appear, all the BEGIN rules at start-up and all the END rules at termination.

BEGIN {actions}

-- The body of the AWK script --

END {actions}

Examples :
awk 'BEGIN{FS=":"} { print $1}' /etc/passwd # begin initializes FS to :
awk 'BEGIN{FS=":" ; OFS="+"} {print $1,$7}' /etc/passwd
awk 'BEGIN{FS=":";OFS="+";print "List of users"}{print $1,$7}' /etc/passwd
awk 'BEGIN{print "Welcome"}'
ls -l | awk '{sum=sum+$5} END{print sum}' # sum is accessed as such , not with $.
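Putting the pieces together, here is a small sketch (with made-up input) that shows when each block runs:

```shell
# BEGIN runs once before any input, the middle block once per line,
# and END once after all input has been read.
printf '10\n20\n30\n' | awk '
BEGIN { print "summing" }
      { sum += $1 }
END   { print "total", sum }'
# prints "summing", then "total 60"
```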

Built-In AWK Functions

Examples :
awk '{print int($1)}'
awk '{print sqrt($1)}' # square root function
awk '{print length($1)}' # length function
awk '{print length}' # prints length of i/p line
awk 'length>60' /etc/passwd
awk 'length>60 { print length,$0}' /etc/passwd

awk '{print substr($1,3,2)}' # From the 3rd char, print 2 chars.
awk 'substr($1,3,2) > 50'
awk 'substr($1,3,2) > 50 && substr($1,3,2) < 60'

awk '{print toupper($0)}'
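These functions compose naturally; a quick sketch on a made-up input line:

```shell
# length() gives the string length, substr(s, start, len) extracts a piece
# (start is 1-based), and toupper() upcases.
echo "hello world" | awk '{ print length($0), substr($1, 2, 3), toupper($2) }'
# prints: 11 ell WORLD
```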

Tuesday, April 15, 2008

Perl arrays

An array in perl is an ordered collection of scalar items. While scalar data (single pieces of data) use the $ sign, arrays use the @ symbol in perl. Array indices are whole numbers and the first index is 0.

There are 3 distinct characteristics of arrays in perl :

1. Perl arrays are single dimensional (multi-dimensional structures are built with references).
2. Array size is not fixed.
3. They can hold data items of any scalar type.

Examples :
@strn=("abc",34,56.7,"hello"); # Declares and initialises an array
print @strn; # Print all the elements

What does the below code fragment do ?
$x=("abc",34,56.7,"hello");

Since we are assigning a list to a scalar, it takes the last value, i.e., "hello".

PS : For the difference between arrays and lists , see here.

The syntax that is used to access arrays is closer to arrays in C. In fact, one can often treat Perl's arrays as if they were simply C arrays, but they are actually much more powerful than that.

$, is a global variable called the output field separator. By default it's not set to anything; therefore the print @strn statement above prints all the elements without any spaces. We can set this variable to a separator of our choice.

Examples :
$,=" ";
print @strn;

What do you think the below code fragment should output ?
$,=":";
print "value of $x is ",$x,"\n" ;

Some more special global variables :
$#array Gives the last index of the array.
$" List separator, used when an array is interpolated inside a double-quoted string. Default is a space.
$\ Output record separator. Default is nothing.
$/ Input record separator. Default is \n.

Below examples show how the size of an array is referenced
$s=@strn; # Assigning the array to a scalar gives the no. of elements of the array.
print @strn>5 ; # In a scalar context the array evaluates to its size, so this compares the size with 5.
print scalar @strn; # Explicitly request the size of an array.
print $#strn; # Returns the last index no.

Note we can have a scalar and an array with the same name.
Examples :
$strn=44;
print $strn[0]; # The square bracket differentiates it to be an array.
$strn[100]="rrr"; # Now the array size is 101. The intervening elements,
# and any element beyond the array size, are undef.
$#strn=5; # Truncates the array to 6 elements (indices 0..5).
print @strn[0,4,2]; # prints 0th , 4th and 2nd element.

.. is the range operator. The range must be ascending (the left value must not be greater than the right).
Examples :
@strn[11..15]=(45,6,7,8,9); # truncates any additional values given.
print $strn[-1]; # -1 is the last index no.
print $strn[-2]; # -2 is second last index and so on.

Build-In Array functions :

1. Push ( push array,list of elements )
Pushes 1 or more elements onto the end of the array, and returns the size of the new array.

Examples :
@n=qw(a b c d e f); # qw stands for quote words.
push @n,"56",33,"aa";
print push @n,"ui","ll"; # prints the size of the new array.
print push @n; # returns the size of array.
print @n;

@n=("hello","world");
is the same as
@n=qw(hello world);

2. Pop ( pop arrayname )
Removes the last element of an array and decreases the size of the array by one.
Returns the element removed.

Examples :
$\="\n";
print @ARGV;
pop; # pop looks into @ARGV & removes the last element.
print @ARGV;

3. Unshift ( unshift arrayname,list of elements )
Adds the elements at the beginning of the array (the opposite of push).

Examples :
unshift @n,"first","second";
print @n;

4. Shift ( shift arrayname )
Same as pop , but removes the first element.

Examples :
my @numbers = (1 .. 10);
while(scalar(@numbers) > 0)
{
my $i = shift(@numbers);
print $i, "\n";
}

5. Splice ( splice arr,startindex,no. of elem to be removed,list of elem to add )
Overwrite/Append anywhere in an array.

Examples :
@cities=("bang","hyd","mum","chn");
splice @cities,2,1,"mys";
print "@cities";

splice @cities,0,0,"mum","sri","bhu"; # appends at the begining.
print "@cities";

splice @cities,1,2; # remove 2 elements beginning at index 1. Index starts at 0.
splice @cities,3; # removes all the elements starting from the 3rd index.
splice @cities; # deletes all the elements.


6. Sort ( sort arrayname )
Sorts the array elements in ascending ASCII order by default. This doesn't modify the array; it returns a new sorted list. By default the elements are compared with the string comparison operator.

Examples :
$,=" ";
@cities=("bang","hyd","mum");
print sort @cities; # prints an ASCII-sorted list with a space in between.
print @cities;

@cities=sort @cities; # overwrites the array with the sorted array
print @cities;

The below examples show how to do numeric comparisons.
Examples :
@nn=(45,67,1,11,20,30);
print sort @nn; # o/p 1,11,20,30,45,67 ( ascii sort ).
print sort{$a <=> $b} @nn; # ascending order . Remember this construct.
print @nn;
print sort{$b <=> $a} @nn; # descending order. Remember this construct.
print sort{$b cmp $a} @cities; # string (ascii) comparison in descending order.

7. Reverse ( reverse arrayname )
Reverses the array elements. Doesn't modify the array.

Examples :
print reverse @cities; # prints reverse.
print reverse sort @cities; # descending order.

8. split ( split /pattern/,string )
Returns a list made by splitting the string on a character, string or regex pattern.

Examples :
$s="Hello:world::perl";
@arr2=split(m/:+/,$s); # m stands for match.
# The contents between / / is the regex pattern
# $s is the string to be searched.

9. join ( join separator,array )
It's the opposite of split. Returns a string.

Examples :
$st=join "-",@cities;
print $st;
print join "\n",@cities;

10. Delete ( delete $array[index] )
Deletes an element of an array. Deleting an element other than the last one doesn't change the size of the array; deleting the last element does.

Examples :
delete $cities[1];
print "@cities";
print scalar @cities;
delete $cities[$#cities];
print scalar @cities;
print "@cities";

Tuesday, March 11, 2008

Running external commands in Perl

Perl enables you to call external command line utilities.
Whenever possible, avoid calling external commands
* Perl supports a large number of built-in functions
* External commands are generally not portable
* Often more time consuming (process setup/teardown overhead)

There are several methods to execute external commands

* The open() function
* The system() function
* Back-quotes
* The fork() & exec() functions

All of these methods have different behaviour, so you should choose which one to use depending on your particular need. In brief, these are the recommendations:

system() : You want to execute a command and don't want to capture its output
exec : You don't want to return to the calling perl script
backticks : You want to capture the output of the command
open : You want to pipe the command (as input or output) to your script

The native shell is used to execute the command line.

Using open()

Use open() when you want to:

- capture the data of a command (syntax: open("command |"))

- feed an external command with data generated from the Perl script (syntax: open("| command"))

Examples :

* Read the output from one or more commands

open( README, "ls -l |" );
$line = <README>;

* Write to the input of one or more commands

open( WRITEME, "| Mail -s 'test' joe@foo.com" );
print WRITEME "Dear John,\n";

#-- list the processes running on your system
open(PS,"ps -e -o pid,stime,args |") || die "Failed: $!\n";
while ( <PS> )
{
#-- do something here
}

#-- send an email to user@localhost
open(MAIL, "| /bin/mailx -s test user\@localhost ") || die "mailx failed: $!\n";
print MAIL "This is a test message";

Using system()

system() executes the command specified. It doesn't capture the output of the command.

system() accepts as argument either a scalar or an array. If the argument is a scalar, system() uses a shell to execute the command ("/bin/sh -c command"); if the argument is an array it executes the command directly, considering the first element of the array as the command name and the remaining array elements as arguments to the command to be executed.

For that reason, it's highly recommended for efficiency and safety reasons (especially if you're running a cgi script) that you use an array to pass arguments to system().

Examples :

#-- calling 'command' with arguments
system("command arg1 arg2 arg3");

#-- better way of calling the same command
system("command", "arg1", "arg2", "arg3");

The return value is set in $?; this value is the exit status of the command as returned by the 'wait' call; to get the real exit status of the command you have to shift right by 8 the value of $? ($? >> 8).

If the value of $? is -1, then the command failed to execute, in that case you may check the value of $! for the reason of the failure.

system("command", "arg1");
if ( $? == -1 )
{
print "command failed: $!\n";
}
else
{
printf "command exited with value %d", $? >> 8;
}

# The return value is the integer value returned by the shell

$err = system( "ls -l | more" );

# Here the more command can be used because the new shell inherits STDIN, STDOUT, and STDERR.

Using backticks (``)

In this case the command to be executed is surrounded by backticks. The command is executed and the output of the command is returned to the calling script.

In scalar context it returns a single (possibly multiline) string, in list context it returns a list of lines or an empty list if the command failed.

The exit status of the executed command is stored in $? (see system() above for details).

Examples :

#-- scalar context
$result = `command arg1 arg2`;

#-- the same command in list context
@result = `command arg1 arg2`;

Using exec()

The exec() function executes the command specified and never returns to the calling program, except in the case of failure because the specified command does not exist AND the exec argument is an array.

As with system(), it is recommended to pass the arguments of the function as an array.

PATH Environment Variable
All methods for executing external commands use the $ENV{PATH} environment value to locate "unqualified" commands. Unqualified commands have no explicit full path specification. The $ENV{PATH} environment value is initialized from your user environment when the Perl interpreter starts. The structure of the $ENV{PATH} value is a colon-separated list of search paths.

Sunday, February 3, 2008

Programming Methodology & Algorithms

Programming from Specifications presents a rigorous treatment of most elementary program-development constructs, including iteration, recursion, procedures, parameters, modules and data refinement.

The below link provides more details about the programming methodology :
http://web.comlab.ox.ac.uk/oucl/publications/books/PfS/

Some good algorithm related sites :
http://www2.toki.or.id/book/AlgDesignManual/BOOK/BOOK/BOOK.HTM
http://www.cs.sunysb.edu/~algorith/
http://www.csse.monash.edu.au/~lloyd/tildeAlgDS/

Thursday, January 31, 2008

Standard C library

Both Unix and C were created at AT&T's Bell Laboratories in the late 1960s and early 1970s. The C programming language, before it was standardized, did not provide built-in facilities such as I/O operations. By the beginning of the 1980s, compatibility problems between the various C implementations became apparent.

In 1983 the American National Standards Institute (ANSI) formed a committee to establish a standard specification of C known as "ANSI C". Over time, user communities of C shared ideas and implementations of what is now called the C standard library to provide that functionality.

Since the standardisation of the C library, applications written strictly within the bounds of the standard can be expected to be portable across different platform implementations.

There are a lot of online resources which can act as a reference for the standard C library :
http://www-ccs.ucsd.edu/c/
http://www.acm.uiuc.edu/webmonkeys/book/c_guide/
http://www.freshsources.com/1995002A.HTM

Books on C :
http://publications.gbdirect.co.uk/c_book/

If you would like to learn and understand the C library, you should grab yourself a copy of The Standard C Library by P. J. Plauger. Dr. Plauger has impeccable qualifications for writing this book - he was secretary of the ANSI C committee.

Wednesday, January 30, 2008

Jargon File

Jargon Definition
Language used in a certain profession or by a particular group of people. Jargon is usually technical or abbreviated and difficult for people not in the profession to understand.

But it seems that, within the software industry, jargon is generated far too fast to keep pace with. People use jargon to impress their peers/managers. It's better to learn than to be left out, or to be embarrassed to be asking someone for the meanings. I found one good link which provides a good vocabulary of computer jargon and might prove helpful in the long term.

http://catb.org/~esr/jargon/html/index.html

Free E-Book resources

Around 38 percent of Indian Internet users (14 million) spent an average of 8 hours per week online, as found in the last CNET survey. For the same reason, we see a proliferation of sites providing books in electronic form (E-Books).
I have used the below sites and feel they have a decent collection of books on varied subjects that might interest people from varied backgrounds (though they might appeal more to people from a computer science background).

Some of the links are :
http://en.wikibooks.org/wiki/Main_Page
http://freecomputerbooks.com/
http://homepage.mac.com/kaotech/Free_Books.html
http://www.freetechbooks.com/
http://www.gutenberg.org/wiki/Main_Page

There may be more and better sites. If anyone comes across any, I would be happy if you let me know about them.

Happy Reading

Tuesday, January 29, 2008

Open Source Education : Knowledge is there to be shared

The most practical thing to go open source, in my view, has been education. It started with MIT's OpenCourseWare initiative, which plans to make all of its educational materials for its undergraduate and graduate level courses freely available on the net for anyone.

I think this gives interested students/professionals a chance to make up for the missed opportunity of getting into a course at MIT. Many top universities seem to be following the MIT way and are providing their courses online. With these initiatives, the world won't be divided any more based on the quality of education.

Top 10 universities providing online courses
http://www.jimmyr.com/blog/1_Top_10_Universities_With_Free_Courses_Online.php

MIT Open CourseWare
ocw.mit.edu

Stanford Open CourseWare
http://stanfordocw.org/

CMU Open Learning Initiative
http://www.cmu.edu/oli/

Also, to make it more convenient to search and find courses, this site is handy :
http://ocwfinder.com/

Other helpful sites
http://www.ocwconsortium.org/
http://www.opencourse.info/
http://education.jimmyr.com/
http://www.jimmyr.com/blog/Online_Education_Free_201_2006.php

Monday, January 28, 2008

Tool Tips : Code search

Reading good code written by expert programmers is always a good way to learn a language. Rather than searching the entire web for code snippets, we have some popular code search engines that give instant and relevant results.

Some of the admired code search tools are :

www.csourcesearch.net

http://www.google.com/codesearch

http://www.koders.com/

Tool Tips : Internet Encyclopedia

Welcome! The Internet Encyclopedia is my attempt to take the Internet tradition of open, free protocol specifications, merge it with a 1990s Web presentation, and produce a readable and useful reference to the technical operation of the Internet.

http://www.freesoft.org/CIE/index.htm

Tools Tips - Lightweight PDF reader

The Adobe Acrobat reader seems to be getting heavier day by day. For people like me, who just need a basic reader, I have found a lightweight and cool PDF reader named Foxit.

It's small in size, takes little time to launch, and has a rich feature set. Its core function is compatible with PDF Standard 1.7. You can download it and experience it for yourself :

http://www.foxitsoftware.com/downloads/

As for me, I am simply enjoying it, and have removed the bulky Acrobat components from my system. Even though CPU power and memory requirements are increasing day by day, and software vendors are making the most of it, indirectly forcing people to upgrade their systems, innovative products like this let us put off the switch a little longer...