Monday, February 14, 2011

Oracle Database 10.2.0.4 on Mac OS X Snow Leopard

It's been some time now since both the 10.2.0.4 Database Client (and Instant Client) and Database Server became available for Apple Mac OS X x86-64. The lowest supported OS version was Mac OS X Leopard 10.5.4.

By the time Oracle released Oracle DB 10.2.0.4, Snow Leopard was already out and adopted by the Mac community. If you happen to be on a Snow Leopard box trying to install Oracle, there is a known problem with the database crashing during installation. Several notable Oracle/Mac folks have worked through it and found alternate solutions.

Check out the links below for more information:

Link 1
Link 2

I will try to list the issues and workarounds for getting Oracle Database 10.2.0.4 installed. Some of these points may already be covered in the links above.

1. JDK version on Snow Leopard

JDK 1.6 (64-bit) is the default JDK version available on the platform. If we can install JDK 1.4.2 (32-bit) on Snow Leopard, we will *NOT* require many of the other JDK-related workarounds mentioned below.

Searching the web, I found this link describing how to use the older Leopard JDK 1.4.2 (and 1.5) on Snow Leopard. I'm *NOT* sure if it's officially supported, and I haven't tried it myself, but it's a suggestion if someone wants to give it a try.

http://tedwise.com/2009/09/25/using-java-1-5-and-java-1-4-on-snow-leopard/

2. Working with JDK 1.6 ( 64-bit ) on Snow Leopard

The next workaround is to configure the Snow Leopard system to use the default JDK 6, but in 32-bit mode, since there is still not much support for 64-bit Java (some Oracle install-related JNI libraries are 32-bit). JDK 1.6 is fairly recent, and its adoption, though slowly increasing, is still limited.

This is *NOT* tested, but I assume it should also work: open "/Applications/Utilities/Java Preferences.app" and set JDK 6 32-bit as the default.

The next workaround is to make a symlink so that JDK 1.4.2 points to the default JDK 1.6. But since JDK 1.6 runs 64-bit by default, we need to make changes to some of Oracle's installation scripts.

i) We first need to invoke runInstaller (which installs Oracle) with the "-J-d32" option so that it invokes Java in 32-bit mode, since by default Java 6 is invoked in 64-bit mode.

ii) We also need to modify the $ORACLE_HOME/jdk/bin/java script to pass the "-d32" flag on the command line. $ORACLE_HOME/jdk/bin/java is used to invoke all the different Oracle database configuration assistants (netca, dbca, etc.).

3. Make Errors

During the install, we may still see the below error:
Error in invoking target 'all_no_orcl ipc_g ihsodbc32' of makefile
?/rdbms/lib/ins_rdbms.mk

Workaround: comment out the HSODBC_LINKLINE in ins_rdbms.mk. Simply ignoring the error may also work.

4. Configuration Assistant failures
We should fix the $ORACLE_HOME/jdk/bin/java script to invoke Java in 32-bit mode so that we don't see failures while running the Network Configuration Assistant (NETCA), Database Configuration Assistant (DBCA), etc.

5. EE/SE installs with DB creation on Snow Leopard

The EE/SE installs on Snow Leopard have a known problem: they fail with the below error message while running the DBCA:

ORA-03113: end-of-file on communication channel
( dbca : ORA-3113 with CloneRmanRestore )

Looking through the logs, we can see the below error message:
ORA-07445: exception encountered: core dump [joxnfy_()+2763] [SIGSEGV] [Address not mapped to object] [0x277B8AEB8] [] []

This happens when DBCA is invoked to create the database in the EE/SE installs (not in the software-only installs). The workaround suggested in the previously mentioned links is to copy the old Oracle binary over the newly relinked one.

During Oracle installation, the Oracle binary is relinked/regenerated on the platform. Even though nothing should have prevented Oracle from running smoothly on Snow Leopard, we unfortunately see the above issue when the Oracle binary is relinked on Snow Leopard (with the rest of the libraries Oracle uses being the same old ones).

There seem to have been some changes in Snow Leopard. Even though there were no major changes, as Apple's naming convention would suggest (Leopard -> Snow Leopard), there are still some subtle differences.

One of the changes most likely linked to the Oracle installation crash is described below (found on a wiki):

With the introduction of Apple's Mac OS X 10.6 platform the Mach-O file has undergone a significant modification that causes binaries compiled on a 10.6 computer to be by default only able to run on a 10.6 computer. The difference stems from load commands that Mac OS X's linker (dyld) can not understand on previous Mac OS X versions. Another significant change to the Mach-O format is the change in how the Link Edit tables (found in the __LINKEDIT section) function. In 10.6 these new Link Edit tables are compressed by removing unused and unneeded bits of information, however Mac OS X 10.5 and earlier cannot read this new Link Edit table format. To resolve this issue, the linker flag "-mmacosx-version-min=" is heavily used and depended on. Apple, current maintainer of the Mach-O format, recommends that all developers now use this flag along with the appropriate SDK headers when creating an application/binary.

Below is a link to the dyld (dynamic loader) release notes for Mac OS X 10.6 (Snow Leopard):
DYLD release notes for Mac OS X 10.6 ( Snow Leopard )

Researching the above data led me to a couple of compiler/linker flags which help control the generation of the __LINKEDIT section format (the traditional relocation format or the new compressed format).

-mmacosx-version-min=version ( Apple GCC Flag )
The earliest version of Mac OS X that this executable will run on is version. Typical values of version include 10.1, 10.2, and 10.3.9.

-macosx_version_min version ( LD Flag )
This is set to indicate the oldest Mac OS X version that the output is to be used on. Specifying a later version enables the linker to assume features of that OS in the output file. The format of version is a Mac OS X version number such as 10.4 or 10.5.

Please note the difference between the hyphen ("-") and the underscore ("_"), and also the different names of the flags for GCC and LD respectively.

-no_compact_linkedit ( LD Flag )
Normally when targeting Mac OS X 10.6, the linker will generate compact information in the __LINKEDIT segment. This option causes the linker to instead produce traditional relocation information.

The "-no_compact_linkedit" linker flag is used / makes sense / is allowed only in conjunction with the -mmacosx-version-min=version GCC flag. You can also use the -macosx_version_min version LD flag instead of the GCC flag if you want.

Now, passing the -mmacosx-version-min=version GCC flag and the "-no_compact_linkedit" linker flag on the Oracle link line generates an Oracle binary which is compatible with the older format and also works fine on Snow Leopard.

Below is the modification we need to make to change the Oracle link line manually in the env_rdbms.mk Makefile:

$ diff env_rdbms.mk.new env_rdbms.mk.orig
< ORACLE_LINKER=gcc -flat_namespace -mmacosx-version-min=10.5 -Wl,-no_compact_linkedit $(OLAPPRELINKOPTS) $(LDFLAGS) $(COMPSOBJS)
---
> ORACLE_LINKER=gcc -flat_namespace $(OLAPPRELINKOPTS) $(LDFLAGS) $(COMPSOBJS)

With this regenerated binary, DBCA should proceed through cleanly, and so should the Oracle Database 10.2.0.4 install on Mac OS X Snow Leopard.

HP-UX Software (Patch) Information and Queries

Software and patch management on HP-UX is done using the HP Software Distributor, called SD-UX. Software in SD-UX is organized in a hierarchy of components or objects: filesets, subproducts, products and bundles. The place where these components are stored is called a software depot.

A few common terms in SD-UX:

Filesets: A collection of files and some control scripts. It is the basic entity in the SD-UX hierarchy. One fileset can belong to only one product, but it can be included in a number of subproducts and bundles.

Here is an example: Keyshell.KEYS-END-A-MAN B.11.30

where
1st field is the fileset name
2nd field is the fileset version

Sub-products: If a product contains several filesets, it is better to combine logically related filesets into subproducts.

Here is an example:
X11.MessagesByLang X11 Localized Messages

This sub-product contains the filesets for X11 messages in several languages.

Product:
It is nothing but a set of filesets. In other words, it is a superset of filesets / subproducts.

Here is an example:
X11 B.11.30 HP-UX X Windows Software

where
X11 is the product name
B.11.30 is the product version
third field is the product description

Bundles: Bundles are usually packaged by HP for the distribution of software. A bundle may contain filesets that belong to different products.

Here is an example,
OnlineDiag B.11.20.06 HP-UX 11.0 Support Tools Bundle

Patch Commands
swconfig - configure software / patches
swlist - display software / patch information
swinstall - installs software / patches
swremove - removes software / patches

Patch Logs / Listing
/var/adm/sw/swagent.log --> contains entries from swagent daemon
/var/adm/sw/swinstall.log --> entries/errors from swinstall

Gather Patch Install/State data
swlist -l fileset -a state PH* > /tmp/swlist.txt
swlist -l product 'PH??_*' > /tmp/swlist.txt

Other patch Commands
check_patches --> checks for common problems and issues, e.g. patch attributes, missing patch filesets, etc.
/usr/contrib/bin/show_patches ---> displays only the active patches on the system

How to tell what patches are loaded using the swlist command (10.x)?
Patches are named PHxx_nnnn, where xx can be KL, NE, CO, or SS.
nnnn refers to the patch number, which is always unique no matter what PHxx category is specified.

If a patch has been loaded on a 10.x system, the patch should be listed in the output of swlist. All patches named PHKL*, and some patches named PHNE*, are kernel patches.

A patch name consists of the characters "PH" (Patch HP-UX), followed by a two-character type-identifier, followed by an underscore, followed by a four or five-digit number.

The currently defined patch types are:

CO - COmmands & libraries
KL - KerneL
NE - NEtworking
SS - SubSystems

Kernel patches always require a system reboot, so that the newly updated kernel can be loaded. Many Networking patches (PHNE*) also make modifications to the kernel, and hence require a reboot.

Note that the numerical portion of any given patch name is unique among ALL patches, so there would never be both a patch named "PHCO_23507" and a patch named "PHKL_23507". This lends itself nicely to grepping for a particular patch (e.g., to see if "PHKL_23507" is installed, one could use "swlist -l product | grep 23507").

QPK = Quality Pack, a bundle of patches that HP provides twice yearly for each version of HP-UX.

HP-UX 11i v2
# swlist -l bundle BUNDLE11i HWEnable11i FEATURE11i QPKBASE QPKAPPS

HP-UX 11i v3
# swlist -l bundle BUNDLE11i HWEnable11i FEATURE11i QPKBASE

Compare the installed patches against the latest available on www.itrc.hp.com.

Examples :
To list the installed bundles :
# swlist -l bundle

To list the installed products :
# swlist -l product

To list the installed subproducts :

# swlist -l subproduct

To list the installed filesets alone :
# swlist -l fileset
# swlist -l fileset openssl

To list all the files belonging to the product X11 or fileset openssl :
# swlist -l file X11
# swlist -l file openssl

To open swlist in GUI mode :
# swlist -i

To view the readme file for a product :
# swlist -a readme OS-Core

To find out which Operating Environment is currently installed:
# swlist -l bundle | grep HPUX11i

To generate a comprehensive listing that includes all filesets for the product NETWORKING
# swlist -v -l fileset NETWORKING

To find out which product a file belongs to, the 1st way (slow):
# swlist -l file |grep /bin/ls

The 2nd way is much quicker; as root:
# find /var/adm/sw/products -name INFO -exec grep -l /bin/ls {} +

Thursday, July 9, 2009

Determining if your kernel and hardware are 32-bit or 64-bit on Unix environments

HP UNIX

This technote explains how to establish whether an HP-UX® 11.x kernel is 32-bit or 64-bit capable.

Run getconf KERNEL_BITS on the system in question. The output, either "32" or "64", corresponds to 32-bit or 64-bit kernels, respectively.

# getconf KERNEL_BITS
64

Check the vmunix file for the following entries:

# file /stand/vmunix
/stand/vmunix: PA-RISC1.1 executable ---> 32-bit
/stand/vmunix: ELF-64 executable object file ---> 64-bit

Either check tells you whether the currently running kernel is 64-bit or 32-bit; getconf KERNEL_BITS specifically returns the number of bits used by the kernel for pointer and long data types.

The following returns whether the hardware can support a 64-bit kernel:

# getconf HW_32_64_CAPABLE
1

This shows whether the CPUs are capable of running 32-bit, 64-bit, or both 32- and 64-bit kernels.

# getconf HW_CPU_SUPP_BITS
64

SOLARIS

The easiest way to determine which version is running on your system is to use the isainfo command, which prints information about the application environments supported on the system.

The following is an example of the isainfo command executed on an UltraSPARC™ system running the 64-bit operating system:

% isainfo -v
64-bit sparcv9 applications
32-bit sparc applications

One useful option of the isainfo(1) command is the -n option, which prints the native instruction set of the running platform:

% isainfo -n
sparcv9

The -b option prints the number of bits in the address space (the CPU's bit-size capability) of the corresponding native application environment:

% isainfo -b
64

% echo "Welcome to "`isainfo -b`"-bit Solaris"
Welcome to 64-bit Solaris

A related command, isalist(1), more suited for use in shell scripts, can be used to print the complete list of supported instruction sets on the platform. Some of the instruction set architectures listed by isalist are highly platform-specific, while isainfo(1) describes only the attributes of the most portable application environments on the system. Both commands are built on the SI_ISALIST suboption of the sysinfo(2) system call. See isalist(5) for further details.

The following is an example of the isalist command executed on an UltraSPARC system running the 64-bit operating system:

% isalist
sparcv9+vis sparcv9 sparcv8plus+vis sparcv8plus sparcv8
sparcv8-fsmuld sparcv7 sparc


AIX

For AIX, we will use the bootinfo command. The below commands show whether the hardware is 32-bit or 64-bit capable.

# bootinfo -y
64

# getconf HARDWARE_BITMODE
64

# prtconf -c
CPU Type: 64-bit

Below commands show the running kernel’s bit size :

# bootinfo -K
64

# prtconf -k
Kernel Type: 64-bit

# getconf KERNEL_BITMODE
64

LINUX

For Linux, we will look at the CPU info in /proc. Here, we are mainly interested in the "flags" for the CPUs:

# cat /proc/cpuinfo | grep -i flags
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good pni dtes64 monitor ds_cpl est tm2 ssse3 cx16 xtpr pdcm sse4_1 lahf_lm

We are mainly interested in the lm (Long Mode) flag in the output: its presence indicates that the CPU is 64-bit capable, while its absence means the CPU can only run 32-bit kernels. Note that a 64-bit capable CPU doesn't necessarily mean the rest of the platform (motherboard/firmware) is 64-bit capable.

To determine the bit size of your running kernel, you can also use getconf, similar to HP-UX:

# getconf LONG_BIT
64

This shows that the kernel is running in 64-bit mode.

Tuesday, April 14, 2009

Create shared library on Unix

We will see how to build dynamic (shared) libraries on different Unix flavours.

Apple Mac OS X

$ gcc -arch x86_64 -fno-common -c source.c
$ gcc -arch x86_64 -fno-common -c code.c
$ gcc -dynamiclib -flat_namespace -undefined suppress \
      -install_name /usr/local/lib/libfoo.2.dylib \
      -o libfoo.2.4.5.dylib source.o code.o

-dynamiclib
When passed this option, GCC will produce a dynamic library instead of an executable when linking,
using the Darwin libtool command.

-arch arch
Compile for the specified target architecture arch. The allowable values are i386, x86_64, ppc and ppc64.

-flat_namespace
Use a single-level (flat) namespace for symbol resolution, as is done on other Unixes.

-undefined suppress
Suppress undefined symbols; they will get resolved later from dependent libraries.

GNU Linux

gcc -m64 -fPIC -g -c -Wall a.c
gcc -m64 -fPIC -g -c -Wall b.c
gcc -m64 -shared -Wl,-soname,libmystuff.so.1 -o libmystuff.so.1.0.1 a.o b.o -lc

-fpic/-fPIC
Generate position-independent code ( PIC ) suitable for use in a shared library.

-shared
Produce a shared object which can then be linked with other objects to form an executable.

-m32/-m64
Generate code for 32-bit or 64-bit environments

HP HP-UX

cc +DD64 -Aa -c +Z length.c volume.c mass.c ( 64-bit )
ld -b -o libunits.sl length.o volume.o mass.o

-Amode
Specify the compilation standard to be used by the compiler.
a
Compile under ANSI mode

+z,+Z
Both of these options cause the compiler to generate position-independent code (PIC); +z uses short-displacement addressing while +Z uses long-displacement addressing.

+DD64
Recommended option for compiling in 64-bit mode on either Itanium-based or PA-RISC 2.0 architecture. The macros __LP64__ and (on PA platforms) _PA_RISC2_0 are #defined.

+DD32
Compiles in 32-bit mode and on PA systems creates code compatible with PA-RISC 1.1 architectures. (Same as +DA1.1 and +DAportable.)

+DA2.0W
Compiles in 64-bit mode for the PA-RISC 2.0 architecture. The macros __LP64__ and _PA_RISC2_0 are #defined.

+DA2.0N
Compiles in 32-bit mode (narrow mode) for the PA-RISC 2.0 architecture. The macro _PA_RISC2_0 is #defined. +DA options are not supported on Itanium-based platforms.

SUN SOLARIS

cc -xarch=v9 -Kpic -c a.c
cc -xarch=v9 -Kpic -c b.c
ld -G -o outputfile.so a.o b.o

-Kpic/-KPIC
Generate position-independent code for use in shared libs.

-G
Produce a shared object rather than a dynamically linked executable.

-xarch=v9
Specifies compiling for a 64-bit Solaris OS on SPARC platform.

-xarch=amd64
Specifies compilation for the 64-bit AMD instruction set. The C compiler from Studio 10 onwards predefines __amd64 and __x86_64 when you specify -xarch=amd64.

Links:
Using static and shared libraries across platforms

Shared Libraries (HP-UX)

Thursday, January 15, 2009

FD Passing with Unix Domain Sockets

Unix domain sockets are a two-way, local inter-process communication mechanism accessed through the socket interfaces. The protocol family is AF_UNIX/AF_LOCAL/PF_UNIX/PF_LOCAL. Both the SOCK_STREAM and SOCK_DGRAM modes of communication are supported.

SOCK_STREAM Unix domain sockets can also be used to pass ancillary/control information, including open file descriptors, from one process to another. Any valid descriptor can be passed. File descriptors are transferred between separate processes across a Unix domain socket using the sendmsg() and recvmsg() functions. Both of these system calls take a struct msghdr to minimize the number of directly supplied arguments.

The structure has the below form:
struct msghdr {
void *msg_name; /* optional address */
socklen_t msg_namelen; /* size of address */
struct iovec *msg_iov; /* scatter/gather array */
int msg_iovlen; /* # elements in msg_iov */
void *msg_control; /* ancillary data, see below */
socklen_t msg_controllen; /* ancillary data buffer len */
int msg_flags; /* flags on received message */
};

msg_name -> destination address ( specified for un-connected sockets )
msg_namelen -> length of the address specified in msg_name

msg_iov -> scatter/gather buffer address
msg_iovlen -> Number of scatter/gather ( struct iov ) elements specified

msg_control -> pointer to ancillary/control header & data
msg_controllen -> total length of the control headers & data

msg_flags -> flags on received message

The control message header is declared as below:
struct cmsghdr {
u_int cmsg_len; /* data byte count, including hdr */
int cmsg_level; /* originating protocol */
int cmsg_type; /* protocol-specific type */
/* followed by u_char cmsg_data[]; */
};

cmsg_len -> No. of bytes ( header + data )
cmsg_level -> Originating protocol
cmsg_type -> Protocol specific type

As shown in this definition, normally there is no member with the name cmsg_data[]. Instead, the data portion is accessed using the CMSG_xxx() macros, as described shortly. Nevertheless, it is common to refer to the cmsg_data[] member.

When ancillary data is sent or received, any number of ancillary data objects can be specified by the msg_control and msg_controllen members of the msghdr structure, because each object is preceded by a cmsghdr structure defining the object's length (the cmsg_len member).

CMSG_LEN
unsigned int CMSG_LEN(unsigned int length);

Given the length of an ancillary data object, CMSG_LEN() returns the value to store in the cmsg_len member of the cmsghdr structure, taking into account any padding
needed to satisfy alignment requirements.

One possible implementation could be:
#define CMSG_LEN(length) ( ALIGN(sizeof(struct cmsghdr)) + length )

CMSG_SPACE
unsigned int CMSG_SPACE(unsigned int length);

Given the length of an ancillary data object, CMSG_SPACE() returns the space required by the object and its cmsghdr structure, including any padding needed to satisfy alignment requirements. This macro can be used, for example, to allocate space dynamically for the ancillary data. This macro should not be used to initialize the cmsg_len member of a cmsghdr structure; instead use the CMSG_LEN() macro.

One possible implementation could be:
#define CMSG_SPACE(length) ( ALIGN(sizeof(struct cmsghdr)) + \
ALIGN(length) )

Note the difference between CMSG_SPACE() and CMSG_LEN(): the former accounts for any required padding at the end of the ancillary data object, while the latter is the actual length to store in the cmsg_len member of the ancillary data object.

CMSG_FIRSTHDR
struct cmsghdr *CMSG_FIRSTHDR(const struct msghdr *mhdr);

CMSG_FIRSTHDR() returns a pointer to the first cmsghdr structure in the msghdr structure pointed to by mhdr. The macro returns NULL if there is no ancillary data pointed to by the msghdr structure (that is, if either msg_control is NULL or if msg_controllen is less than the size of a cmsghdr structure).

We provide server and client source examples to show how descriptor passing works.

server.c

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <strings.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

#define UDS "domain_socket"

int send_connection(int fd, int sockfd)
{
    struct msghdr msg;          /* message header */
    struct iovec iov;           /* scatter/gather buffer */
    char b = 'b';
    int rc;
    /* Control message buffer */
    union {
        struct cmsghdr cm;      /* for alignment */
        char control[CMSG_SPACE(sizeof(int))];
    } control_un;
    struct cmsghdr *cmptr;

    msg.msg_control = control_un.control;
    msg.msg_controllen = sizeof(control_un.control);

    /* Populate the control info */
    cmptr = CMSG_FIRSTHDR(&msg);
    cmptr->cmsg_len = CMSG_LEN(sizeof(int));
    cmptr->cmsg_type = SCM_RIGHTS;
    cmptr->cmsg_level = SOL_SOCKET;
    *((int *) CMSG_DATA(cmptr)) = fd;   /* fd being passed here */

    msg.msg_name = (caddr_t) NULL;
    msg.msg_namelen = 0;

    iov.iov_base = &b;
    iov.iov_len = 1;
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;

    msg.msg_flags = 0;

    rc = sendmsg(sockfd, &msg, 0);
    if (rc == -1) {
        perror("sendmsg");
        exit(-5);
    }
    close(sockfd);
    return rc;
}

int listener(char *path)
{
    struct sockaddr_un unsock = {0};
    struct sockaddr_un remote = {0};
    int sockfd;
    socklen_t len;

    sockfd = socket(AF_UNIX, SOCK_STREAM, 0); /* AF_UNIX for local domain sockets */
    if (sockfd == -1) {
        perror("socket");
        exit(-1);
    }

    unlink(UDS);
    bzero(&unsock, sizeof(unsock));
    unsock.sun_family = AF_UNIX;
    strcpy(unsock.sun_path, UDS);
    unsock.sun_len = SUN_LEN(&unsock);  /* sun_len exists on BSD/Mac OS X only */

    /* Binding to a pathname creates the reference file in the file system */
    if (bind(sockfd, (struct sockaddr *)&unsock, SUN_LEN(&unsock)) == -1) {
        perror("bind");
        exit(-1);
    }

    if (listen(sockfd, 5) == -1) {
        perror("listen");
        exit(1);
    }
    len = SUN_LEN(&unsock);
    getsockname(sockfd, (struct sockaddr *)&unsock, &len);
    printf("bound name = %s, returned len = %d\n", unsock.sun_path, len);

    for (;;) {
        socklen_t len = sizeof(struct sockaddr_un);
        int fd, sendfd;

        fd = accept(sockfd, (struct sockaddr *)&remote, &len);
        if (fd == -1) {
            perror("accept");
            exit(-2);
        }
        printf("Accepted a connection\n");

        /* Open the file. The returned fd of the file is passed to the client */
        sendfd = open(path, O_RDONLY | O_CREAT, 0755);
        if (sendfd == -1) {
            perror("open");
            exit(-3);
        }
        send_connection(sendfd, fd);
        close(sendfd);
    }
}

int main()
{
    listener("./test.txt");
    return 0;
}

client.c

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

#define UDS "domain_socket"

int receive_fd(int fd)
{
    struct msghdr msg;
    struct iovec iov;
    char buf[1];
    int rv;

    union {
        struct cmsghdr cm;
        char control[CMSG_SPACE(sizeof(int))];
    } control_un;
    struct cmsghdr *cmptr;

    iov.iov_base = buf;
    iov.iov_len = 1;

    msg.msg_name = NULL;
    msg.msg_namelen = 0;
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;

    msg.msg_control = control_un.control;
    msg.msg_controllen = sizeof(control_un.control);

    rv = recvmsg(fd, &msg, 0);
    if (rv == -1) {
        perror("recvmsg");
        exit(-1);
    } else if (rv > 0) {
        cmptr = CMSG_FIRSTHDR(&msg);
        if (cmptr->cmsg_type != SCM_RIGHTS) {
            printf("Unknown control info\n");
            exit(-3);
        }
        return *((int *) CMSG_DATA(cmptr));
    } else
        return -1;
}

int sock_stream()
{
    int s, len;
    struct sockaddr_un remote;

    if ((s = socket(AF_UNIX, SOCK_STREAM, 0)) == -1) {
        perror("socket");
        exit(1);
    }

    printf("Trying to connect...\n");

    remote.sun_family = AF_UNIX;
    strcpy(remote.sun_path, UDS);
    remote.sun_len = SUN_LEN(&remote);  /* sun_len exists on BSD/Mac OS X only */
    len = SUN_LEN(&remote);
    if (connect(s, (struct sockaddr *)&remote, len) == -1) {
        perror("connect");
        exit(1);
    }
    printf("Connected ..\n");
    return s;
}

void reader(int fd)
{
    char ch;
    while (read(fd, &ch, 1) > 0)
        write(1, &ch, 1);
}

int main()
{
    int fd, passfd;

    fd = sock_stream();
    passfd = receive_fd(fd);
    if (passfd != -1)
        reader(passfd);

    return 0;
}

Friday, October 31, 2008

Dtrace Basics

DTrace is a comprehensive dynamic tracing facility that can be used by administrators and developers to examine the behavior of both user programs and of the operating system itself. With DTrace we can explore our system to understand how it works, track down performance problems across many layers of software, or locate the cause of aberrant behavior. It is safe to use on production systems and does not require restarting/recompiling either the system or applications.

We write D scripts, which consist of a probe description, predicates and actions to be taken:
probe description
/predicate/
{
actions
}

When we run the D script, we get results based on the probe descriptions (the actions are executed based on the predicate filter). Think of probes as events: a probe fires when the event happens. Let's take a simple D script example, example.d:

syscall::write:entry
/execname == "bash"/
{
printf("bash with pid %d called write system call\n",pid);
}

Here the probe description is syscall::write:entry, which describes entry into the write system call. The predicate is execname == "bash"; execname is a built-in variable containing the executable name, and we proceed with the actions only when the string matches. The action statement uses the built-in printf function.


Providers/Probes

To list all of the available probes on your system, type the command:
# sudo dtrace -l

It might take some time to display all of the output. To count up all your probes, you can type the command:

# sudo dtrace -l | wc -l
22567

If you look at the output from dtrace -l in your terminal window, each probe has two names: an integer ID and a human-readable name. The human-readable name is composed of four parts. When writing out the full human-readable name of a probe, we write all four parts separated by colons, like this:

provider:module:function:name

You might note that some fields are left blank. A blank field is a wildcard and matches all of the probes that have matching values in the parts of the name that you do specify.

Now let's look a little deeper. The probe is described using four fields, the provider, module, function, and name.

* provider: specifies the instrumentation method to be used. For example, the syscall provider is used to monitor system calls, while the io provider is used to monitor disk I/O.
* module and function: describe the module and function you want to observe.
* name: typically represents the location in the function. For example, use entry for name to instrument entry into the function.

Note that wildcards like * and ? can be used, and blank fields are interpreted as wildcards. The table below shows a few examples:

Probe Description        Explanation
syscall::open:entry      entry into the open system call
syscall::open*:entry     entry into any system call that starts with open (open and open64)
syscall:::entry          entry into any system call
syscall:::               all probes published by the syscall provider

A predicate can be any D expression. The action is executed only when the predicate evaluates to true. The table below shows some examples:
Predicate                  Explanation
cpu == 0                   true if the probe executes on cpu0
pid == 1029                true if the pid of the process that caused the probe to fire is 1029
execname != "sched"        true if the process is not the scheduler (sched)
ppid != 0 && arg0 == 0     true if the parent process id is not 0 and the first argument is 0

The action section can contain a series of action commands separated by semicolons (;). The table below provides some examples:
Action      Explanation
printf()    print something using a C-style printf() format
ustack()    print the user-level stack
trace()     print the given variable

Note that predicates and action statements are optional. If the predicate is missing, the action is always executed. If the action is missing, the name of the probe that fired is printed.

Below links provide references for different parts of a probe.
List of providers
List of functions
List of aggregating functions
List of variables
List of built-in variables

Examples

pid provider
------------
Example                      Explanation
pid2439:libc:malloc:entry    entry into malloc() in libc for process id 2439
pid1234:a.out:main:return    return from main for process id 1234
pid1234:a.out::entry         entry into any function in the main executable of pid 1234
pid1234:::entry              entry into any function in any library for pid 1234

You can limit the number of probes enabled by modifying the probe description.
Probe Description        Explanation
pid$1:libc::entry        limit to only a given library
pid$1:a.out::entry       limit probes to non-library functions
pid$1:libc:printf:entry  limit probes to just one function

Here is the command you can run to print all the functions that process id 1234 calls:
# dtrace -n pid1234:::entry

This can also be written as a script that takes the process id as a parameter:

#!/usr/sbin/dtrace -s
pid$1:::entry
{}

Here is a script to print the stack trace when the program makes the write system call. Note that you need to run this with the -c option.

#!/usr/sbin/dtrace -s
syscall::write:entry
{
@[ustack()]=count();
}

The syscall Provider
--------------------
This is probably the most important provider to learn and use because system calls are the main communication channel between user level applications and the kernel.

To list each firing of the probe at entry into the close(2) system call, with information about the process making it, use the following:

# dtrace -n syscall::close:entry

To identify which process sent a kill(2) signal to a particular process, use the following script:

#!/usr/sbin/dtrace -s
syscall::kill:entry
{
trace(pid);
trace(execname);
}
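For the kill(2) system call, arg0 is the target pid and arg1 is the signal number, so the script can be extended to show both ends of the call (a sketch):

#!/usr/sbin/dtrace -qs
syscall::kill:entry
{
        /* sender name and pid, then the signal and its target */
        printf("%s (pid %d) sent signal %d to pid %d\n",
            execname, pid, arg1, arg0);
}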

The proc Provider
-----------------
Trace all the signals sent to all the processes currently running on the system:

#!/usr/sbin/dtrace -wqs
proc:::signal-send
{
printf("%d was sent to %s by ", args[2], args[1]->pr_fname);
system("getent passwd %d | cut -d: -f5", uid);
}

Now add the predicate /args[2] == SIGKILL/ to the script, and send SIGKILL signals to different processes from different users:

#!/usr/sbin/dtrace -wqs
proc:::signal-send
/args[2] == SIGKILL/
{
printf("SIGKILL was sent to %s by ", args[1]->pr_fname);
system("getent passwd %d | cut -d: -f5", uid);
}

Here you can see the use of pr_fname, a member of the psinfo_t structure describing the receiving process.
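Other members of psinfo_t can be used in the same way. For example, this sketch counts signals by sending and receiving command name (the signal-send probe fires in the context of the sender, so execname here is the sending process):

#!/usr/sbin/dtrace -qs
proc:::signal-send
{
        /* aggregate: sender command, receiver command */
        @sigs[execname, args[1]->pr_fname] = count();
}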

References:

DTrace @ OpenSolaris
DTrace inventor's blog
BigAdmin page
DTrace Guide

Thursday, July 3, 2008

gdb equivalent commands on dbx

The dbx debugger is found on the Solaris and AIX platforms. Since dbx and gdb, the other most popular debugger, use different commands, this note is for people who want to find the dbx equivalents of gdb commands.

dbx doesn't support command completion and abbreviation the way gdb does, but there are other ways to make it work a bit like gdb. dbx does have a gdb mode (gdb on), but it lacks some of the gdb commands. Below I list the most commonly used commands for the two debuggers. For each entry, the dbx command is on the left of the ":" and the gdb equivalent on the right.
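For example, to experiment with the gdb emulation mode from within a dbx session (a sketch; the mode is available in recent Sun Studio releases of dbx):

gdb on               # switch dbx to gdb command syntax
break main           # gdb-style commands now work
gdb off              # back to native dbx commands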

Reading Core files

dbx - core : gdb -c core # Reading the core file.
dbx - pid : gdb -p pid # dbx can find the program automatically.

Logging

dbxenv session_log_file_name file : set logging # log output to a file
dbxenv session_log_file_name : show logging

Debugging Information Support

stabs (SUN), dwarf2, -g -O : stabs (GNU), dwarf2, -g -O
Macro support (-g3) : Macro support (-g3) # Macro debugging support

Sun Studio compilers don't generate debug info for macros, though.

Debugging Programs with Multiple Processes

dbxenv follow_fork_mode parent : set follow-fork-mode parent
dbxenv follow_fork_mode child : set follow-fork-mode child
dbxenv follow_fork_mode ask : -

Breakpoints

stop in function : break function
stop at [filename:]linenum : break [filename:]linenum
stopi at address : break *address # Stop at an instruction address
status [n] : info breakpoints [n] # Show all breakpoints
delete [breakpoints] : delete [breakpoints] [range ...] # Delete a breakpoint
delete all : - # Delete all breakpoints

Examining the Stack

where [n] : backtrace [n] # Show the stack backtrace
frame [n] : frame [args] # Go to a particular frame
dump : info locals # Dump info about local variables

Examining Data

print -f expr : print /f expr
Array slicing (p array[2..5]) : Artificial arrays (p *array@len)
display : display
x addr [/nf] : x/nfu addr
regs : info registers
regs -f | -F : info all-registers
print $regname : info registers regname ...

Memory access checking

check -access : set mem inaccessible-by-default [on|off]
check -memuse : set mem inaccessible-by-default [on|off]
check -leaks : set mem inaccessible-by-default [on|off]

Examining the Symbol Table

whereis -a addr : info symbol addr
whatis [-e] arg : whatis arg
whatis [-e] arg : ptype arg
whatis -t [typename] : info types [regexp]
modules -v / files : info sources

Also, it's better to set up aliases for commonly used gdb commands, mapped to their dbx equivalents. I am using the ~/.dbxrc file below:
--
dalias alias=dalias # let "alias" work as dbx's dalias command

alias b="stop in" # set breakpoint in a function
alias sa="stop at" # set breakpoint at a line number
alias st=status # show breakpoints, numbered
alias del=delete # delete a breakpoint

alias cka="check -access" # check for invalid memory access
alias ckl="check -leaks" # check for memory leaks

alias r="run " # start the program running at its beginning
alias q=quit

alias w=where # show frames in call stack
alias bt=where # show frames in call stack
alias u=up
alias d=down
alias f=frame

alias l=list # list some source lines
alias lw="list -w" # from 5 before current line to 5 after
alias p=print # print value of variable or expression
alias ptype="whatis -t" # find declaration of variable or function
alias wi=whatis # find declaration of variable or function

alias ni=nexti
alias si=stepi
alias n=next # cont to next stmt in same function
alias s=step # step INTO the function about to be called
alias su="step up" # cont to next stmt in parent function
alias c=cont # continue running

alias h=history