C++ / Difference between Turbo C++ and Unix C++
I am a graduate in Mathematics. I have learnt programming in Turbo C++. I was wondering whether there are differences between Turbo C++ and programming C++ in a Unix environment. Is programming C++ in Unix the same as programming C++ in a Linux environment? Can you tell me the difference between the three?
Thank you for your time and patience.
Firstly I would like to point out that there are differences between one version of Turbo C++ and another - it depends on which version you are using. See for example the Wikipedia article on Turbo C++ at http://en.wikipedia.org/wiki/Turbo_C++
Secondly it depends on what you mean by "programming in". If you mean the whole development experience then yes, typically there will be differences between development environments depending on the tools available (and which ones you are using if you have a choice <g>). If you mean just the code written and built using a C++ compiler then maybe not so much - see the following points.
Thirdly, it also depends on what you are using C++ for. If you are doing systems programming then again yes there are going to be differences between the operating system APIs (application programming interfaces) for MS-DOS, Windows, UNIXen (which differ among themselves to some extent), Linux, etc. If you are sticking to standard C++ then there will not be so many differences between compilers (see following).
The fourth area of difference between C++ implementations is exactly which variation/subset/superset of C++ is supported.
The C++ language was standardised in 1998 as an ISO/ANSI standard. Since then the standard (a document) has been revised for bug fixes and clarifications in a technical corrigendum in 2003, and has had a "non-normative" (meaning it does not have to be included for an implementation to be standard conforming) library technical report in 2006 that added some nice additional functionality (much of it useful for mathematicians). See http://www.open-std.org/jtc1/sc22/wg21/
for some online documents. The C++ standard itself can be purchased in book form "The C++ Standard" published by Wiley or can be obtained in PDF format from the ANSI online shop for 30 USD (see http://webstore.ansi.org/ansidocstore/product.asp?sku=INCITS%2FISO%2FIEC+14882-2
) (warning! Long URL, may have wrapped). I should point out that this is intended for compiler writers and library implementers and the like and so is not an easy read. It can take quite a bit of time to find the information you are after.
Since the release of the C++ standard in 1998 most compilers have been improving their implementations of the core C++ language and the C++ standard library towards the ISO/ANSI standard. Few if any have reached full compliance with the standard. However modern compilers tend to be close enough that they will work (in the main) with modern libraries such as the Boost library collection (see http://www.boost.org/
). In fact the Boost libraries are written assuming a high degree of C++ standard compliance, so they tend to give compilers a good workout. They have some regression tests that you can look at; see for example the summary table at http://engineering.meta-comm.com/boost-regression/CVS-RC_1_33_0/user/summary_rel
(warning! Long URL, may have wrapped).
Older compilers of course conform to the standard less well, if at all (really old compilers will of course pre-date the standard), and the variation in what is supported by different compilers tends to be wider for older compilers.
Then again, even standard compliant compilers vary in some of their details. This is usually a consequence of C++ being quite close to the hardware in some senses, and therefore needing to generate efficient code for different architectures. The most obvious differences are in the sizes of the basic types. All the standard guarantees is that each notionally larger type is at least as large as the next notionally smaller one. Further, the char type always has a size of 1 (i.e. sizeof(char) == 1), which does not necessarily mean 8 bits, although on most modern desktop, workstation and server machines it will. Unlike the other integer types, plain char may be signed or unsigned by default (the other integer types default to signed). This I think was a consequence of existing practice and not wanting to break too much existing code!
From this it follows that the size of a short int is at least as large as that of char, the size of int is at least that of short int, and the size of long int is at least that of int, and similarly for the unsigned variants. Likewise, for the floating point types, the precision of a double is at least that of a float and the precision of long double is at least that of a double. The int type is meant to be of a size that is natural for the processor, so one would expect it to be 32 bits on a 32-bit processor and 16 bits on a 16-bit processor. The other types whose size can vary are the pointer types. On a 32-bit machine they would probably be 32 bits in size, for example, but on a 16-bit machine they were often either 16 or 32 bits depending on the address model used (often selected by using compiler options). However things are not so clear cut on a 64-bit processor, where the sizes of the integer types vary between compilers. See for example http://www.unix.org/whitepapers/64bit.html
and note the inclusion of the common but non-standard type long long as well as int32 and int64 (or __int32 and __int64).
An area of OS functionality which tends to influence compiler implementations, rather than the underlying system architecture, is wide character support. In C++ such characters are supported by the type wchar_t which, unlike in C, is a proper built-in type like char or int (in C it is a type alias for one of the other integer types such as unsigned short, and some, usually older, C++ compilers still follow C in this area). These days many OSes use some form of Unicode encoding internally. Win32 for example uses UTF-16, and Linux uses UCS-4 (UTF-32). Thus a compiler targeting a Win32 system will probably use a 16-bit wchar_t whereas a compiler targeting a Linux platform will probably use a 32-bit wchar_t.
The final area of difference is that of additional vendor supplied libraries, frameworks, components and additions to the C++ language. Microsoft's compilers for example have extensions to help with using COM, and the newest one has support for their .NET technology in the form of a new Microsoft-invented variation of C++ called C++/CLI (CLI is the Common Language Infrastructure - part of the underlying .NET technology), which they submitted to ECMA for standardisation. Borland used to add special support for routing Windows messages to member functions or similar (I forget the details; it has been about 15 years or so since I used a Borland compiler in any serious way - sorry). Borland probably also supplies their own libraries and components (the feature chart at http://www.turboexplorer.com/cpp
implies quite a lot of additional features above the C++ compiler and standard library).
So what is the minimum you require to build a standard C++ project? A text editor (e.g. Notepad, Emacs, Vi) to write the source code files; a C++ compiler and a linker/link loader to compile and link the source code into executable files; a runtime support library and a C++ standard library implementation; and a console and command shell (cmd.exe, bash, sh, csh etc.) in which to enter the build command lines and run the built executables.
Even with these, the editor usage and command line syntax are likely to change from product to product. Anything else is likely to change not just from platform to platform but from compiler to compiler. For example you will probably find that the Borland, Microsoft, Dev-C++ and Eclipse IDEs (integrated development environments) provide similar functionality that makes developing applications easier. However they are different tools for achieving similar if not the same things. Thus each will have its own ways of doing things, with differing strengths and weaknesses, niggles and likes.
If you step outside of what is included in standard C++ then there is a greater variety of differences. The main desktop and server operating systems fall into two obvious major camps: Microsoft and POSIX (the various UNIXen, Linux etc.). The Mac was the other obvious candidate, but with MacOS X it at least has a UNIX-rooted OS underpinning it. If you are lucky Borland will provide some common POSIX/UNIX/Linux system functions in some form as additions to their runtime or C++ standard library (Microsoft does this for some functions).
In addition, for additional 'fun' UNIX (including Linux) has various flavours and API sets in addition to POSIX, for more information see the FAQ at http://www.faqs.org/faqs/unix-faq/faq/part6/
The two systems' APIs tend to be quite different. Often there are equivalent features and functionality - iterating through a file system directory for example - and the differences can be smoothed over by using code that wraps the raw OS interfaces into a common interface - the Boost file system library for example provides a common way to iterate through a directory. In some cases there is no equivalent functionality on one OS; for example in POSIX you can fork a process (see http://www.opengroup.org/onlinepubs/000095399/functions/fork.html
) which you cannot do in Win32 because such an action is not supported (however the Visual C++ runtime library does provide a set of spawn functions that can be used in place of fork/exec call sequences in POSIX code). In other cases the functionality is sort of equivalent but very different - access control for example. In UNIX/Linux you have the owner/group/world model whereas in Win32 you have access control lists. Trying to match the two (or more) systems into some commonality in such cases is interesting to say the least!
Another large area of difference is between GUIs. On Windows of course we have the GUI built into the OS. UNIX and Linux tend to use X window system implementations, often with additional desktop functionality provided by other frameworks such as Motif, OpenLook, Gnome or KDE. However there are various C++ frameworks that wrap the base functionality up in a more C++ friendly form. On Windows the most famous/infamous is the MFC (Microsoft Foundation Classes), which Borland licensed and included with their C++ offering at one point. Now they seem to provide their own - the VCL. The problem here is that they tend to be platform specific (VCL might support Linux in one form or another - CLX?). If you wish to write cross platform GUI code then it is best to start out using a cross platform GUI framework. Two examples are Qt from Trolltech (which is used by KDE) (http://www.trolltech.com/products/qt/
) and wxWidgets (http://www.wxwidgets.org/ ).
So, if you are writing code that uses only standard C++ and additional libraries and frameworks available across the operating systems you are targeting then your code will probably port between the various targeted systems fairly easily - with only such details as integer type sizes (i.e. range) and floating point precision to worry about.
If you are writing code that uses platform specific APIs, frameworks or libraries, or compiler specific features then you are going to have trouble porting between platforms.
As to the tools available: well, under Windows the toolsets tend to be much better integrated, and Borland of course is one of the best at doing this. On the other hand many tools are written to take advantage of the commonality between the UNIXen and Linux, and Windows, if it is supported at all, is often supported only because it is very popular. You are more likely to have to resort to command lines, scripts, make files and the like on UNIX/Linux.
If you do intend to target multiple platforms then you most definitely need a very good test suite for your application - one that is itself ported along with it - if the application is in any way non-trivial or mission/security/safety critical. You should of course have a good test suite for such applications anyway, but you may need to add extra tests to make sure platform differences are not causing problems.
Additionally try to find existing libraries that support as many of your target platforms as possible for as much non-C++-standard functionality as possible - no point making work for yourself! If you do have to write code using non-standard features then keep them as encapsulated as possible – wrap them up in functions and/or classes and keep them separate from the rest of the code by using these wrapper functions and/or classes. Then you will stand a better chance of porting the functionality to another platform without too much pain.
If you would like more specific information on some aspect of this (rather large) subject then please ask further questions. Also, in such areas internet search engines are your friend - you will be amazed at the amount of online technical information on development topics (if you were not already aware of such information).