First of all, thank you for answering my previous questions and helping me.

I would like to know:

1) What is the difference between #include <bitset> and #include <bitset.h>? If I use #include <bitset.h> it gives an error; why? Please also tell me why you used #include <iostream> and not #include <iostream.h>.

2) What is the difference between an array and a bitset?

3) When sending bitset data to another computer, should I store the bitset data in another variable?

I am using VC++ 6.
1/ <bitset> is the name of the header file you have to include to use std::bitset. <bitset.h> has nothing to do with standard C++, so it probably does not exist, or refers to a pre-standard implementation from the Standard Template Library from Hewlett Packard; I forget the details.
It is similar for <iostream>: <iostream> is the standard C++ header. In this case, however, there existed a pre-standard version as part of the pre-standard IOStreams library (sometimes called traditional IOStreams), and this was called <iostream.h>.
All C++ standard library headers have no extension. Those that contain declarations and definitions from the C library have the prefix c (for example, the C library header <stdio.h> is <cstdio> in standard C++).
For backward compatibility some implementations, especially older ones such as VC++ 6, provide both the pre-standard and the standard header file names. Those that are also C compilers of course have both the C headers and the C++ headers.
Another difference between pre-standard and standard implementations of the C++ library is that, wherever possible, standard C++ library features live in the namespace std. As namespaces are a fairly recent language feature, pre-standard implementations do not in general place library names in any namespace.
2/ Eh? An array is a built-in language feature, unless you mean some class you have available to you. A bitset is a class template that is part of the C++ standard library and is designed to make it fairly easy to work with sets of bits of any (fixed) size. It is probably implemented as an array of larger storage units; on most modern architectures this must be at least an array of bytes (8-bit unsigned char most likely), as the byte is the smallest addressable unit on such processors. std::bitset is a convenience, as are all the library features: you could spend the time writing the operations it provides yourself, but if it does what you need efficiently enough, why waste time re-inventing the wheel?
A bitset has to be slightly different from, say, a plain array of bool, in that a bool object is usually represented as a whole byte (char) as a compromise between space and efficiency. Bit operations within machine words can be much slower than reading or setting the value at an address. For example, the question "is this byte 1?" can be answered more quickly than the question "is this bit of this byte 1?".
3/ Yes. There is no requirement that the implementation of std::bitset on your machine, with your operating system and your compiler and standard C++ library, is the same as or compatible with mine. However, you can input and output a bitset from/to a stream. This, as I showed before, works with the bits shown as a string of '1' and '0' characters. You could therefore locate a socket stream implementation, or you could just use the string constructor and the to_string member function.
It is probably better, at least at first, to send the data as a string of '0' and '1' characters rather than as a multi-byte value (10 bits require at least 2 bytes to send). The reason is that different processors arrange the bytes that make up longer word sizes in different orders. This is known as the endianness of the processor. Popular processors are generally either big endian, in which the most significant byte of a word is stored at the lowest memory address, or little endian, in which the least significant byte of a word is stored at the lowest memory address. Intel x86 processors are little endian; Sun SPARC, Motorola 68K and the PowerPC family of processors are big endian.
Alternatively you can look at using the htons and ntohs (host-to-network-short and network-to-host-short) macros to convert the values from host to network byte order and back again at the other end. Note that these are not standard C++. They are part of the socket implementation, which under Windows is called Windows Sockets, or Winsock (the current version is 2). You need to include Winsock2.h, link with Ws2_32.lib and use Ws2_32.dll. At least, that is what the documentation says; I would be surprised if the libraries are in fact needed just to use these macros. On the other hand, you will most likely be using sockets somewhere down the line, at which point the libraries would be needed.
In this case you would use the std::bitset::to_ulong member function to get the bits as an unsigned long, cast it to an unsigned short (or short), use htons to convert it to network byte order, and send it to the receiving machine. On the receiving machine you would use ntohs to convert to that machine's host byte ordering and construct a std::bitset from the result.
This is all I am going to say on this subject, so please do not ask further follow-ups on the same theme, as we are drifting into areas which would take me much too long to explain and which you should research yourself: read books, look at other people's code, and so on. In this respect, learning to use a search engine effectively is a good move, as is learning to use the MS Developer Network and its library (see http://msdn1.microsoft.com/en-gb/default.aspx as a starting point).