Dec 27, 2010

My first taste of the ASP.NET MVC framework

This week I started reading a book about Microsoft's new web development framework, ASP.NET MVC. As soon as I started reading about it, I really liked the taste of the framework.
My intention was just to skim the technology, but once I started reading Pro ASP.NET MVC Framework by Steven Sanderson (Apress), the book grabbed me and I could not put it down. I believe it is now in its second edition.
Sanderson has done a great job of explaining everything through a very interesting little application. Although I couldn't find a printed copy, I read about four chapters from an eBook and it really impressed me.
The second thing is that I was astonished by the way Microsoft designed the framework. Although the MVC approach has been in active use by other vendors (Java, Ruby) for a while, I thought it contradicted the idea of Web Forms. With MVC you do a lot of things by hand-coding, and in exchange you get a level of control over the HTML that Web Forms lacks.
When I was in college I played with PHP for a long time. One thing I both liked and hated about PHP was that I had to write a lot of code just to display a grid page with paging from a database. On one side, I felt confident and excited about the control I had over the HTML tags, the CSS, and my understanding of the inner workings of the page; all that thinking before coding is the excitement of programming. On the other side, when it comes to production, the same grid page can be done in two minutes using ASP.NET Web Forms.
For the past few years we have been using ASP.NET Web Forms, the successor to classic ASP. The good side of Web Forms is rapid application development, but many developers have complained about the lack of control over the page. Now Microsoft has released its MVC framework, currently in version 2. With MVC I regain the control over the page that I lost in Web Forms, and many Web Forms features are simply not used on this platform. The major emphases are scalability and a clean separation of layers; that separation, together with support for test-driven development (TDD), is one of its coolest features. Routing also provides clean web page URLs as well as good management of the View layer.
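Routing, for example, is configured when the application starts. As a rough sketch, the route below is the default one the Visual Studio MVC 2 project template generates (the names here are the template defaults, nothing specific to my code):

```csharp
// Global.asax.cs - the default route registration from the MVC 2 template
using System.Web.Mvc;
using System.Web.Routing;

public static void RegisterRoutes(RouteCollection routes)
{
    routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

    routes.MapRoute(
        "Default",                                    // route name
        "{controller}/{action}/{id}",                 // clean URL pattern
        new { controller = "Home", action = "Index",
              id = UrlParameter.Optional }            // parameter defaults
    );
}
```

A URL like /Product/Edit/5 is matched against the pattern, so the framework invokes the Edit action of a ProductController with id 5.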
In addition, the request/response mechanism is a little different from before. Controllers and views play the major roles here: I really like the idea of controllers receiving a request and responding with the corresponding view.
Finally, I would like you to try a hello-world example using ASP.NET MVC. Here the whole operation is done using a controller and a view, since we don't have any real model.


//Controller class
using System.Web.Mvc;

public class HelloWorldController : Controller
{
    public ViewResult HelloWorld()
    {
        //View(viewName) - no model name is passed, since we have no model
        ViewData["Hw"] = "Hello world from ASP.NET MVC";
        return View("helloW");
    }
}
//View
//Our view may look like this; when designing the view I assume we are using the master page
//helloW.aspx
<%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage" %>
<asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server">
    HelloWorld
</asp:Content>
<asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">
    <h2>HelloWorld</h2>
    <%= ViewData["Hw"] %> <%-- reference using the ViewData name index --%>
</asp:Content>



The example is just an illustration of what MVC looks like: it simply displays "Hello world from ASP.NET MVC". For an even fuller illustration, Visual Studio creates a sample hello-world template with all of its layers separated.

Dec 21, 2010

Simplified XML Based Phone Book Database using LINQ

Introduction

One day I was thinking about how ADO.NET handles database access using the DataSet and ORM tools. One thing that impressed me was the power of XML. Using plain XML, we can use almost all the features of a relational database (thanks to its inventors). The introduction of XML even changed the way programs are built, the structure of configuration files, and so on. In .NET, the DataSet class can read and write its in-memory data as XML. When I was planning to develop a small application using these features, I thought of building it around ORM-style operations.

When I searched for table mapping from a dataset to objects, I found the framework incapable of doing what LINQ to SQL does. So finally I decided to develop my own application which maps objects to dataset data and vice versa, and persists the data to an XML file.

In this article, I would like to share some information from the application I made, using XML as the database backend and mapping data table rows to objects with LINQ over a dataset.

Background

As I said in the introduction, I was looking for data persistence to XML from a dataset. Although I found a lot of snippets, I found data manipulation in the dataset using LINQ easy and powerful, and that is what I would like to share.

Concepts

The general function of the application is to store contact details and retrieve contacts from the database (in our case, the XML file). The application is designed as a three-layered architecture, with each layer separated from the others:

  • The data layer, where the data is persisted in an XML file.
  • The data access and business layers: in our case both are defined in the DAL class, where data is mapped from the XML file through the dataset into contact objects, and saved from the live contact objects through the dataset back to the XML file.
  • The UI layer, where the data presentation can be developed using any technology such as Windows Forms, WPF, or the web. In our case I develop the UI using Windows Forms as an illustration.

The main concept behind the code lies here: once data is loaded from the XML file into the dataset, it is mapped from the dataset to the appropriate contact objects. Unlike ORM tools such as LINQ to SQL, we do the mapping from dataset to objects manually. Once the objects are mapped, the business functionality is implemented with LINQ queries over those objects.

Using the Code

The code is divided into two parts. The first is the data access layer class, combined with the business layer, which performs all the operations. The second is the UI, designed to perform the various operations of the phonebook application. Although I have used only Windows Forms for the UI layer, it is easily extended to other UIs like WPF, Web Forms, etc. The DAL consists of the contact class together with a dataset:

Data Reading

The ReadData() method reads data from the XML file into the dataset, and then each data row in the dataset is mapped to a contact object.


//read data from the xml file and map it to contact objects
private void ReadData()
{
    try
    {
        ds.Clear();
        Contacts.Clear();
        //check whether the file exists; otherwise create a blank file
        if (!System.IO.File.Exists(FileName))
        {
            //Close() releases the stream returned by Create, so the file is not left locked
            System.IO.File.Create(FileName).Close();
        }
        else
        {
            //read data into the dataset, then map each data row to a contact object
            ds.ReadXml(FileName, XmlReadMode.IgnoreSchema);
            Contact c;
            foreach (DataRow dr in ds.Tables[0].Rows)
            {
                c = new Contact()
                {
                    SN = Convert.ToInt32(dr[0]),
                    Name = dr[1].ToString(),
                    Tele = dr[2].ToString(),
                    Email = dr[3].ToString(),
                    Gender = Convert.ToChar(dr[4])
                };
                //add to the contacts collection
                Contacts.Add(c);
            }
        }
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.Message);
        //throw;
    }
}

The point here is that the dataset is used to map the data from XML into contact objects. In the end, the Contacts collection holds the contact data, and at that point querying the objects with LINQ becomes easy.

Business Operation

Now let us look at other typical operations once the data is loaded into the contacts list; say we want to get all contacts.


//using a simple LINQ query we can get all the contacts
public List<Contact> GetContact()
{
    var cont = (from cc in Contacts
                orderby cc.SN
                select cc).ToList();
    return cont;
}

//for a single contact
public Contact GetContact(int id)
{
    return Contacts.Where(cc => cc.SN == id).SingleOrDefault();
}

Again, if we look at the typical CRUD operations, data manipulation is also handled with simple list operations. Unlike a typical ADO.NET based database application, however, all operations are done simply on the list.


//delete a contact from the list
public Boolean DeleteContact(int id)
{
    //remove the contact with the given id from the contacts list
    return Contacts.Remove(GetContact(id));
}

//add a contact to the list
public void AddContact(Contact c)
{
    //add the new contact to the Contacts list
    Contacts.Add(c);
}

//clear the list
public void clearlist()
{
    //clears the contacts from the collection
    Contacts.Clear();
}

Data Saving

Finally, the Contacts list is persisted to the XML file only when the Save() method is called. This looks like a typical LINQ to SQL operation, where data is saved to the database only when the SubmitChanges() method of the DataContext object is called. Our save method looks like this:

//maps each object in the contacts list to a datarow in the dataset;
//the dataset is then saved to the xml file
public Boolean Save()
{
    try
    {
        ds.Clear();
        DataRow dr;
        foreach (Contact c in Contacts)
        {
            dr = ds.Tables[0].NewRow();
            dr[0] = c.SN;
            dr[1] = c.Name;
            dr[2] = c.Tele;
            dr[3] = c.Email;
            dr[4] = c.Gender;
            ds.Tables[0].Rows.Add(dr);
        }
        ds.WriteXml(FileName, XmlWriteMode.IgnoreSchema);
        return true;
    }
    catch (Exception)
    {
        return false;
    }
}

This is the reverse of the read operation: the data is mapped from objects to data rows, then saved from the dataset to the XML file via the dataset's WriteXml() method. Finally, the layer is compiled as a class library to be referenced from the UI layer.

The UI Layer

Since the business operations are done in the DAL, the job of the UI layer is simply to present the data. The main task is just referencing the class library and implementing the phonebook functionality in your favorite UI. In my case I chose Windows Forms, although it can be extended in a similar manner to a web application or WPF.

When the Windows Form loads, the contact data is bound to a BindingSource, which is then used throughout the application for binding the Windows controls on the form and the data grid.


using Book;

//....
private void Form1_Load(object sender, EventArgs e)
{
    //gets all contacts in object form
    bindingSource1.DataSource = dall.GetContact();
    bindsource();
    bindControls();
}

public void bindsource()
{
    dgvaddress.DataSource = bindingSource1;
}

Many of the BindingSource operations are pretty straightforward. I simply use BindingSource methods like:

  • bindingSource1.MoveFirst()
  • bindingSource1.MoveNext()
  • bindingSource1.AddNew()

etc.
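As a rough sketch, the navigation buttons on the form just delegate to these methods (the button names below are illustrative, not necessarily the ones in the project):

```csharp
// Navigation handlers simply move the BindingSource's Position;
// every control bound to bindingSource1 updates automatically.
private void btnfirst_Click(object sender, EventArgs e)
{
    bindingSource1.MoveFirst();   // Position = 0
}

private void btnnext_Click(object sender, EventArgs e)
{
    bindingSource1.MoveNext();    // Position + 1 (stops at the last item)
}

private void btnnew_Click(object sender, EventArgs e)
{
    bindingSource1.AddNew();      // appends a blank item and moves to it
}
```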

Finally, when data editing and manipulation are finished, the Save method of our DAL class is called to persist the data.

private void btnsave_Click(object sender, EventArgs e)
{
    //save data in the grid to the xml file
    AddGridValuetolist();
    if (dall.Save())
        MessageBox.Show("success");
    else
        MessageBox.Show("error");
}

When the save button is clicked, the data in the grid is first mapped into contact objects and added to the Contacts list in the DAL class; after that, the Save method of the DAL class is called to write the list to the XML file.


//adds each datarow from the grid to the contact list
public void AddGridValuetolist()
{
    Contact c;
    dall.clearlist();
    foreach (DataGridViewRow dr in dgvaddress.Rows)
    {
        if (dr.Cells[2].Value != null)
        {
            //mapping from DataGridViewRow into a contact object
            c = new Contact()
            {
                SN = Convert.ToInt32(dr.Cells[0].Value),
                Name = Convert.ToString(dr.Cells[1].Value),
                Tele = Convert.ToString(dr.Cells[2].Value),
                Email = Convert.ToString(dr.Cells[3].Value),
                Gender = Convert.ToChar(dr.Cells[4].Value)
            };
            dall.AddContact(c);
        }
    }
}

Conclusion

As explained above, the whole point of the article is object-to-dataset mapping. We have created our own form of object persistence: data flows from the XML file to objects via the dataset, and back again. If you are familiar with LINQ to SQL, the DataContext class creates a mapping between database tables and objects and vice versa; in our application the mapping between objects and the XML data is done with the dataset and LINQ queries. When I was developing the application, my main intent was an XML-based database. However, when I looked around, I could not find any ORM-like tool that works against a dataset. LINQ to DataSet gives me list-style operations over data tables and data rows just like any typed collection, whereas object mapping as in LINQ to SQL is unavailable in LINQ to DataSet. That led me to build an ORM-like application using LINQ to DataSet, persisting finally to an XML file.

source code

Nov 2, 2010

DRY (don’t repeat yourself) Principle in Context of Data Access Layer

Introduction

Today most object-oriented applications are based on some common architectural design, and the major architectural patterns have evolved from the pillars of OOP. Besides those pillars, the basic object-oriented design principles play a major role in designing the architecture of a system. In this article I would like to share how the DRY principle plays a major role in designing an OOP-based system.

The Principle

“Avoid duplicating code by abstracting out things that are common and placing those in a single location”.

DRY is simply about expressing each requirement in one place. Once you put all duplicated code in one location, you can maintain the flexibility of your code. If we have a common operation among different modules or objects in our system, rather than typing similar code again and again, we generalize the code into a single module. Then we call (use) that module whenever needed.

Let us take a simple classic example: assume we want to calculate the permutation of a number (nPr) procedurally, as in C. We were taught to put common functionality in one function, then call that function as needed. In our case, since nPr = n!/(n-r)!, the first step is to find the factorial of n, then divide it by the factorial of (n-r).

#include <stdio.h>

int factorial(int n)
{
    //......factorial of a number
    int f = 1;
    for (int i = 2; i <= n; i++)
        f *= i;
    return f;
}

int main()
{
    int p, n, r;
    scanf("%d%d", &n, &r);
    p = factorial(n) / factorial(n - r);
    printf("Permutation of %d P %d = %d", n, r, p);
    return 0;
}

Here, instead of calculating the factorial of n and the factorial of (n-r) separately inside main, we separated the repeated code into the common function factorial.

The DRY principle asserts the same thing: it puts the repeated logic in a common (single) place, and we simply refer to that code. The main advantages are:

· Code maintainability: you don't have to change every module, you have just one module to change.

· The requirement is defined in one location, so whenever the requirement is modified you can easily spot where to change.

Now consider a counter-example where the principle is violated: each entity class (Movie, Game, Music) implements its own database access.

class Database
{
    public readonly static string connectionString;
}

class Movie
{
    private int Id { get; set; }
    public string Title { get; set; }
    public string Type { get; set; }
    public double Rate { get; set; }

    public Movie(string title, string type, double rate)
    {
        this.Title = title;
        this.Type = type;
        this.Rate = rate;
    }

    public void Save()
    {
        string sql = "insert into movie(id,title,type,rate) values(" +
            this.Id + ",'" + this.Title + "','" + this.Type + "'," + this.Rate + ")";
        using (SqlConnection con = new SqlConnection(Database.connectionString))
        {
            con.Open();
            SqlCommand cmd = new SqlCommand(sql, con);
            cmd.ExecuteNonQuery();
        }
    }

    public void Update()
    {
        string sql = "update movie set title='" + this.Title + "',type='" +
            this.Type + "',rate=" + this.Rate + " where id=" + this.Id;
        using (SqlConnection con = new SqlConnection(Database.connectionString))
        {
            con.Open();
            SqlCommand cmd = new SqlCommand(sql, con);
            cmd.ExecuteNonQuery();
        }
    }

    //----- other duplicated CRUD methods
}

The Game and Music classes also perform these operations in a similar manner. The main problem with this design is that the code is duplicated across all the classes with only minor changes. This becomes a problem when any change occurs to the database, such as switching from one vendor to another: all of the duplicated code must be traced and modified in each class.

In a large project with many classes, this makes code maintenance during a database migration very painful. Besides, the requirement is defined in different parts of the application even though it remains the same.

DRY Solution

Today one of the solutions employed for this type of problem is to create a generalized class, the data access layer (DAL). Using the DAL we separate the database from the application classes and perform the CRUD operations through one common class. This design is based directly on the DRY principle: every class, in our case Game, Movie and Music, no longer implements its own CRUD operations but simply delegates them to the DAL class.

Here the DAL class holds the common database operation done using ADO.NET such as

· ExecuteReader

· ExecuteNonQuery

· Fill Dataset

· ExecuteScalar

Since each operation on the database involves one or more of the above ADO.NET methods, we encapsulate them in the DAL class rather than repeating the same code in every class and across multiple methods.

A short snippet of a typical operation may look like this:

public class DAL
{
    public static string connectionstring = "";

    public bool ExecuteNonQuery(string sql)
    {
        int affected;
        using (SqlConnection con = new SqlConnection(connectionstring))
        {
            SqlCommand cmd = new SqlCommand(sql, con);
            con.Open();
            affected = cmd.ExecuteNonQuery();
        }
        return affected > 0;
    }
}

class Movie
{
    //.....
    private DAL dal;

    public Movie(string title, string type, double rate)
    {
        this.Title = title;
        this.Type = type;
        this.Rate = rate;
        dal = new DAL();
    }

    public void Save()
    {
        //here we simply build the sql query for saving the movie object
        //into the database
        string sql = "insert into Movie(title,type,rate) values('" + this.Title +
            "','" + this.Type + "'," + this.Rate + ")";
        dal.ExecuteNonQuery(sql);
    }

    //for other operations like delete, update, etc. the class stays
    //loosely coupled in the same way
    public void Delete()
    {
        //sql delete operation
    }
}

My point here is to explain the DRY principle in the context of building a DAL class, so I have shown only a rough, small portion of the DAL. In a production environment the class would be built differently, with exception handling and other mechanisms added.

Conclusion

The DRY principle tells us to remove repeated code and put it in a common place. In the example I tried to show a simple program that does not repeat the common database-operation code.

Although I explained the DRY principle in the context of a database application, it is one of the basic object-oriented principles applied in all aspects of OOP. Even other OOP principles, such as SRP, build on the DRY principle. So, like many OOP principles, the major function of DRY is to lower the cost of code maintenance by encapsulating each requirement definition in a single place.




May 5, 2010

Singleton pattern

In software engineering, the singleton pattern is a design pattern used to implement the mathematical concept of a singleton, by restricting the instantiation of a class to one object. This is useful when exactly one object is needed to coordinate actions across the system. The concept is sometimes generalized to systems that operate more efficiently when only one object exists, or that restrict the instantiation to a certain number of objects (say, five). Some consider it an anti-pattern, judging that it is overused, introduces unnecessary limitations in situations where a sole instance of a class is not actually required, and introduces global state into an application.

Implementation
Implementation of a singleton pattern must satisfy the single-instance and global-access principles. It requires a mechanism to access the singleton class member without creating a class object, and a mechanism to persist the values of class members among class objects. The singleton pattern is implemented by creating a class with a method that creates a new instance of the class if one does not exist; if an instance already exists, it simply returns a reference to that object. To make sure that the object cannot be instantiated any other way, the constructor is made protected (not private, because reuse and unit tests could need to access the constructor). Note the distinction between a simple static instance of a class and a singleton: although a singleton can be implemented as a static instance, it can also be lazily constructed, requiring no memory or resources until needed. Another notable difference is that static member classes cannot implement an interface, unless that interface is simply a marker. So if the class has to realize a contract expressed by an interface, it really has to be a singleton.
C#
/// <summary>
/// Thread-safe singleton example created at first call
/// </summary>
public sealed class Singleton
{
    private static readonly Singleton _instance = new Singleton();

    private Singleton() { }

    public static Singleton Instance
    {
        get { return _instance; }
    }
}
Another method, which makes use of more of C#'s features, is as follows.




/// <summary>
/// Thread-safe singleton example created at first call
/// </summary>
public sealed class Singleton
{
    /// <summary>
    /// Uses auto-implemented get and set properties.
    /// Note that the setter can have any other access modifier,
    /// as long as it is less accessible than public.
    /// </summary>
    public static Singleton Instance { get; private set; }

    /// <summary>
    /// A static constructor runs automatically on first reference
    /// to the class.
    /// </summary>
    static Singleton() { Instance = new Singleton(); }
}
Example of use with the factory method pattern
The singleton pattern is often used in conjunction with the factory method pattern to create a system-wide resource whose specific type is not known to the code that uses it. An example of using these two patterns together is the Java Abstract Window Toolkit (AWT).

java.awt.Toolkit is an abstract class that binds the various AWT components to particular native toolkit implementations. The Toolkit class has a Toolkit.getDefaultToolkit() factory method that returns the platform-specific subclass of Toolkit. The Toolkit object is a singleton because the AWT needs only a single object to perform the binding and the object is relatively expensive to create. The toolkit methods must be implemented in an object and not as static methods of a class because the specific implementation is not known by the platform-independent components. The name of the specific Toolkit subclass used is specified by the "awt.toolkit" environment property accessed through System.getProperties().

The binding performed by the toolkit allows, for example, the backing implementation of a java.awt.Window to bind to the platform-specific java.awt.peer.WindowPeer implementation. Neither the Window class nor the application using the window needs to be aware of which platform-specific subclass of the peer is used.
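The same combination can be sketched in C# (the class names here are hypothetical, chosen only to mirror the AWT example): the abstract base class exposes a factory method that both picks the concrete subclass and guarantees a single instance.

```csharp
using System;

// Hypothetical sketch of singleton + factory method, mirroring
// AWT's Toolkit.getDefaultToolkit().
public abstract class Toolkit
{
    private static Toolkit _default;

    // Factory method: chooses the platform-specific subclass,
    // but only ever creates one instance (the singleton part).
    public static Toolkit GetDefaultToolkit()
    {
        if (_default == null)
            _default = new WindowsToolkit(); // platform-specific choice
        return _default;
    }

    public abstract string CreateWindow();
}

// Concrete, platform-specific implementation; callers never name it.
internal class WindowsToolkit : Toolkit
{
    public override string CreateWindow()
    {
        return "native window";
    }
}
```

Callers write `Toolkit t = Toolkit.GetDefaultToolkit();` and depend only on the abstract `Toolkit` type.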

Drawbacks
It should be noted that this pattern makes unit testing far more difficult[6], as it introduces global state into an application.

It should also be noted that this pattern reduces the potential for parallelism within a program, because access to the singleton in a multi-threaded context must be serialised, i.e. by locking.

Advocates of dependency injection would regard this as an anti-pattern, mainly due to its use of private and static methods.

Some have suggested ways to break the singleton pattern using techniques such as reflection in languages such as Java.

Jan 25, 2010

Type system

Types can be primitive, holding a single piece of information such as an integer, a floating-point number, or a character; or they can be more complicated composite objects which store multiple pieces of information, some combination of data and, at times, even functionality. In computer science, a type system may be loosely defined as a way to associate one (or more) types with each value that can be used in a program. Just like spoken languages, computer languages have grammars that give the rules for constructing phrases and sentences.

Type checking

The process of verifying and enforcing the constraints of types (type checking) may occur either at compile time (a static check) or at run time (a dynamic check). If a language specifies its typing rules strongly (i.e., more or less allowing only those automatic type conversions which do not lose information), one can refer to the language as strongly typed; if not, as weakly typed. The terms are not used in a strict sense.

Static typing

A programming language is said to use static typing when type checking is performed during compile-time as opposed to run-time. In static typing, types are associated with variables not values. Statically typed languages include Ada, AS3, C, C++, C#, F#, JADE, Java, Fortran, Haskell, ML, Pascal, Perl (with respect to distinguishing scalars, arrays, hashes and subroutines) and Scala. Static typing is a limited form of program verification (see type safety): accordingly, it allows many type errors to be caught early in the development cycle. Static type checkers evaluate only the type information that can be determined at compile time, but are able to verify that the checked conditions hold for all possible executions of the program, which eliminates the need to repeat type checks every time the program is executed. Program execution may also be made more efficient (i.e. faster or taking reduced memory) by omitting runtime type checks and enabling other optimizations.

Because they evaluate type information during compilation, and therefore lack type information that is only available at run-time, static type checkers are conservative. They will reject some programs that may be well-behaved at run-time, but that cannot be statically determined to be well-typed. For example, even if an expression always evaluates to true at run-time, a program containing the code

if <complex test> then 42 else <something that is not a number>

will be rejected as ill-typed, because a static analysis cannot determine that the else branch won't be taken.[1] The conservative behaviour of static type checkers is advantageous when <complex test> evaluates to false infrequently: a static type checker can detect type errors in rarely used code paths. Without static type checking, even code coverage tests with 100% code coverage may be unable to find such type errors. Code coverage tests may fail to detect such type errors because the combination of all places where values are created and all places where a certain value is used must be taken into account.

The most widely used statically typed languages are not formally type safe. They have "loopholes" in the programming language specification enabling programmers to write code that circumvents the verification performed by a static type checker and so address a wider range of problems. For example, most C-style languages have type punning, and Haskell has such features as unsafePerformIO: such operations may be unsafe at runtime, in that they can cause unwanted behaviour due to incorrect typing of values when the program runs.

Dynamic typing

A programming language is said to be dynamically typed when the majority of its type checking is performed at run-time as opposed to at compile-time. In dynamic typing, types are associated with values, not variables. Dynamically typed languages include Erlang, Groovy, JavaScript, Lisp, Lua, Objective-C, Perl (with respect to user-defined types but not built-in types), PHP, Prolog, Python, Ruby, Smalltalk and Tcl. Compared to static typing, dynamic typing can be more flexible (e.g. by allowing programs to generate types and functionality based on run-time data), though at the expense of fewer a priori guarantees. This is because a dynamically typed language accepts and attempts to execute some programs which may be ruled as invalid by a static type checker. The term 'dynamic language' means something different ('runtime dynamism') and a dynamic language is not necessarily dynamically typed.

Dynamic typing may result in runtime type errors—that is, at runtime, a value may have an unexpected type, and an operation nonsensical for that type is applied. This operation may occur long after the place where the programming mistake was made—that is, the place where the wrong type of data passed into a place it should not have. This may make the bug difficult to locate.

Dynamically typed language systems, compared to their statically typed cousins, make fewer "compile-time" checks on the source code (but will check, for example, that the program is syntactically correct). Run-time checks can potentially be more sophisticated, since they can use dynamic information as well as any information that was present during compilation. On the other hand, runtime checks only assert that conditions hold in a particular execution of the program, and these checks are repeated for every execution of the program.

Development in dynamically typed languages is often supported by programming practices such as unit testing. Testing is a key practice in professional software development, and is particularly important in dynamically typed languages. In practice, the testing done to ensure correct program operation can detect a much wider range of errors than static type-checking, but conversely cannot search as comprehensively for the errors that both testing and static type checking are able to detect. Testing can be incorporated into the software build cycle, in which case it can be thought of as a "compile-time" check, in that the program user will not have to manually run such tests.

Combinations of dynamic and static typing

The presence of static typing in a programming language does not necessarily imply the absence of all dynamic typing mechanisms. For example, Java, and various other object-oriented languages, while using static typing, require for certain operations (downcasting) the support of runtime type tests, a form of dynamic typing. See programming language for more discussion of the interactions between static and dynamic typing.
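A downcast in C# illustrates this mix: the cast compiles because of the static types involved, but the actual check happens at run time. A small sketch:

```csharp
// Static typing with a dynamic (run-time) type test.
object boxed = "hello";          // static type: object; runtime type: string

string s = (string)boxed;        // downcast: compiles, and the runtime check succeeds
// int n = (int)boxed;           // would also compile, but throws
                                 // InvalidCastException at run time

// The "as" operator performs the same runtime test without throwing:
string maybe = boxed as string;  // "hello"
int? notAnInt = boxed as int?;   // null, because the runtime check fails
```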

Static and dynamic type checking in practice

The choice between static and dynamic typing requires trade-offs.

Static typing can find type errors reliably at compile time. This should increase the reliability of the delivered program. However, programmers disagree over how commonly type errors occur, and thus what proportion of those bugs which are written would be caught by static typing. Static typing advocates believe programs are more reliable when they have been well type-checked, while dynamic typing advocates point to distributed code that has proven reliable and to small bug databases. The value of static typing, then, presumably increases as the strength of the type system is increased. Advocates of dependently typed languages such as Dependent ML and Epigram have suggested that almost all bugs can be considered type errors, if the types used in a program are properly declared by the programmer or correctly inferred by the compiler.[3]

Static typing usually results in compiled code that executes more quickly. When the compiler knows the exact data types that are in use, it can produce optimized machine code. Further, compilers for statically typed languages can find assembler shortcuts more easily. Some dynamically typed languages such as Common Lisp allow optional type declarations for optimization for this very reason. Static typing makes this pervasive. See optimization.

By contrast, dynamic typing may allow compilers to run more quickly and allow interpreters to dynamically load new code, since changes to source code in dynamically typed languages may result in less checking to perform and less code to revisit. This can also shorten the edit-compile-test-debug cycle.

Statically typed languages that lack type inference (such as Java and C) require that programmers declare the types they intend a method or function to use. This can serve as additional documentation for the program, which the compiler will not permit the programmer to ignore or allow to drift out of synchronization. However, a language can be statically typed without requiring type declarations (examples include Haskell, Scala, and C# 3.0), so this is not a necessary consequence of static typing.

Dynamic typing allows constructs that some static type checking would reject as illegal. For example, eval functions, which execute arbitrary data as code, become possible (however, the typing within that evaluated code might remain static). Furthermore, dynamic typing better accommodates transitional code and prototyping, such as allowing a placeholder data structure (mock object) to be transparently used in place of a full-fledged data structure (usually for the purposes of experimentation and testing). Recent enhancements to statically typed languages (e.g. Haskell Generalized algebraic data types) have allowed eval functions to be written in a statically type checked way.[4]

Dynamic typing typically makes metaprogramming more effective and easier to use. For example, C++ templates are typically more cumbersome to write than the equivalent Ruby or Python code.[citation needed] More advanced run-time constructs such as metaclasses and introspection are often more difficult to use in statically typed languages.


Strong and weak typing

One definition of strong typing is that the language prevents an operation from succeeding when its arguments have the wrong type. A C cast gone wrong exemplifies the problem of absent strong typing; if a programmer casts a value from one type to another in C, not only must the compiler allow the code at compile time, but the runtime must allow it as well. This may permit more compact and faster C code, but it can make debugging more difficult.

Some observers use the term memory-safe language (or just safe language) to describe languages that do not allow undefined operations to occur. For example, a memory-safe language will check array bounds at runtime, or else statically guarantee (i.e., at compile time, before execution) that out-of-bounds array accesses are rejected or raise a well-defined error.

Weak typing means that a language implicitly converts (or casts) types when used. Revisiting the previous example, we have:

var x := 5;    // (1)  (x is an integer)
var y := "37"; // (2) (y is a string)
x + y; // (3) (?)

In a weakly typed language, the result of this operation is not clear. Some languages, such as Visual Basic, would produce runnable code producing the result 42: the system would convert the string "37" into the number 37 to forcibly make sense of the operation. Other languages like JavaScript would produce the result "537": the system would convert the number 5 to the string "5" and then concatenate the two. In both Visual Basic and JavaScript, the resulting type is determined by rules that take both operands into consideration. In some languages, such as AppleScript, the type of the resulting value is determined by the type of the left-most operand only.

Safely and unsafely typed systems

A third way of categorizing the type system of a programming language uses the safety of typed operations and conversions. Computer scientists consider a language "type-safe" if it does not allow operations or conversions which lead to erroneous conditions.

var x := 5;     // (1)
var y := "37";  // (2)
var z := x + y; // (3)

In languages like Visual Basic, variable z in the example acquires the value 42. While the programmer may or may not have intended this, the language defines the result specifically, and the program does not crash or assign an ill-defined value to z. In this respect, such languages are type-safe; however, if the value of y were a string that could not be converted to a number (e.g., "hello world"), the results would be undefined. Such languages are type-safe (in that they will not crash) but can easily produce undesirable results.

Now let us look at the same example in C:

int x = 5;
char y[] = "37";
char* z = x + y;

In this example z will point to a memory address five characters beyond y, equivalent to three characters after the terminating zero character of the string pointed to by y. The content of that location is undefined, and might lie outside addressable memory. The mere computation of such a pointer may result in undefined behavior (including the program crashing) according to C standards, and in typical systems dereferencing z at this point could cause the program to crash. We have a well-typed, but not memory-safe program — a condition that cannot occur in a type-safe language.

For more on type systems, refer to Type system.
