Featured Post

Research Proposal on Wetlands Essay Example

Research Proposal on Wetlands Essay A wetland is an area that is covered with water seasonally or all year round. A wetland is...

Tuesday, October 29, 2019

English 102 - 5 Annotated bibliography - The immigration policy in Research Paper

English 102 - 5 Annotated bibliography - The immigration policy in Alberta Canada complete as soon as possible - Research Paper Example (Boyd, Vickers 3) There was a much lower number of women than men for the first twenty years of the 20th century as well. Interestingly, the statistics used are relatively thorough, allowing for a much better idea of population numbers. The core approach of the article is specifically the numbers, races, ethnicities, and sexes of the immigrants that have come to Canada over the past 100 years. The authors did a good job of ensuring little to no racial or sexual bias and presented the facts as they were available to them. Ten visual aids were used, including charts; these visual aids were well cited from verifiable information sources. Some of those sources were Statistics Canada and the International Migration Review. Statistics Canada is a part of the Canadian Census Bureau. Given the intent of the essay, this would seem to be a positive benefit as a source for the paper. With numerous references as well as the statistical backing of the Canadian Census Bureau, it remains a valid option as a source for the essay. Annotated Bibliography 2 - McIsaac, E. "Nation Building through Cities: A new deal for immigrant settlement in Canada." Caledon Institute of Social Policy ISBN 1-55382-043-6. (2003): 1-13. Web. 20 Mar 2011. ... The idea that immigration in this way does not add to a broader strategy, and lastly that the effect of the new strategy would further exclude and marginalize new immigrants (McIsaac 2). The author goes into detail explaining each concern and its effect on Canada. Given the focus of the article on one particular policy towards immigrants, it does pose a potential problem for use. This problem is not a serious issue, however, and used properly it will allow for a much more thorough paper, specifically through its look at this new policy approach and the reaction of the general public to it. 
It will be beneficial to include alternative viewpoints, or to use this paper only as an additional point of explanation during the paper itself. Other than the singularity of its premise, it is a well-written and well-cited paper specifically focused on immigration and immigration policy in Canada. Annotated Bibliography 3 - Alberta Government. "Supporting immigrants and immigration to Alberta: an Overview." Alberta Government (ND): 1-16. Web. 20 Mar 2011. http://www.employment.alberta.ca/documents/WIA/WIA-IM_framework_overview.pdf This paper was printed as an information piece by the Alberta, Canada government. They are using print media to advertise Alberta, Canada to legal immigrants. A three-pronged strategy initiated by the Alberta government includes increasing the skills and knowledge levels of Albertans. Additionally, they wish to facilitate the mobility of labor in Canada as well as increase the number of immigrants coming to Canada (Alberta Government 2). They feel that by filling the needed job roles with able individuals they can promote a stronger internal economy which will benefit the people as well. The core

Sunday, October 27, 2019

The Map Generalization Capabilities Of ArcGIS Information Technology Essay

The Map Generalization Capabilities Of ArcGIS Information Technology Essay The volume of data processing associated with Geographical Information Systems is enormous. The information needed from this data varies across applications. Specific details can be extracted: for instance, resolution diminished, contours reduced, data redundancy eliminated, or the features on a map needed for a given application absorbed. This is all aimed at reducing storage space and accurately representing details from a map at a larger scale on another at a much smaller scale. This paper presents a framework for the Map Generalization tools embedded in ArcGIS (a Geographical Information Systems software by ESRI) as well as the algorithm each tool uses. Finally, it offers a review of all the tools, indicating which is more efficient after thorough analysis of the algorithm used and the desired output produced. 1.0 Introduction 1.1 Definition of Map Generalization As Goodchild (1991) points out, Map Generalization is the ability to simplify and show spatial [features with location attached to them] relationships as seen on the earth's surface modelled into a map. The advantages of adopting this process cannot be overemphasized. Some are itemized below (Lima d'Alge J.C., 1998): it reduces the complexity and the rigours Manual Cartographic Generalization goes through; it conveys information accurately; it preserves the spatial accuracy drawn from the earth's surface when modelling. Many software vendors came up with solutions to tackle the problem of manual cartography, and this report will reflect on the ArcGIS 9.3 Map Generalization tools. 1.2 Reasons for Automated Map Generalization In times past, to achieve this level of precision, the services of a skilled cartographer were needed. He was faced with the task of modelling [representing features on the earth's surface] from a large-scale map into a smaller-scale map. 
This form of manual cartography is very strenuous because it consumes a lot of time, and a lot of expertise is needed: the cartographer must inevitably draw all the features and represent them in a smaller form, while also taking into consideration the level of precision required so as not to render the data or graphical representation invalid. These setbacks were the motivating factor for the advent of Automatic Cartographic Design, known as Automated Map Generalization. A crucial part of map generalization is information abstraction, not necessarily data compression. A good generalization technique should be intelligent, taking into consideration the characteristics of the image and not just the ideal geometric properties (Tinghua, 2004). Several algorithms [sets of instructions taken to achieve a programming result] have been developed to enable this, and this report will explore each of them critically. 1.3 Process of Automated Map Generalization As Brassel and Weibel (n.d.) describe, Map Generalization can be grouped into five steps: Structure Recognition, Process Recognition, Process Modelling, Process Execution, and Display. The step that will be elaborated upon for the purpose of this report is Process Recognition [types of generalization procedures], which involves different manipulations of geometry in order to simplify the shape and represent it on a smaller scale (Shea and McMaster, 1989). 2.0 Generalization Tools in ArcGIS 9.3 2.1 Smooth Polygon This is a tool used for cartographic design in ArcGIS 9.3. It involves dividing the polygon into several vertices, each vertex being smoothed when the action is performed (FreePatentsOnline, 2004-2010). An experiment is illustrated below to show how Smooth Polygon works. Add the layer file Polygon, which has the attribute name Huntingdonshire, a district selected from the England_dt_2001 area shapefile that was downloaded from UKBorders. 
Next, I selected the ArcToolbox on the standard toolbar of ArcMap, went to Generalization Tools under Data Management Tools, and clicked on Smooth Polygon. Open Smooth Polygon > select the input feature (the polygon to be smoothed), in this case Polygon > select the output feature class (the file location where the output image is to be saved) > select the smoothing algorithm (here PAEK) > select the smoothing tolerance. Fig 2.0: Display before Smooth Polygon. Fig 2.1: Display after Smooth Polygon. The table in Fig 2.1 shows the output when Polynomial Approximation with Exponential Kernel (Bodansky et al., 2002) was used. The other algorithm that can be applied for this procedure is Bezier Interpolation.

Algorithm Type        | Simplification Tolerance (km) | Time Taken (secs)
PAEK                  | 4                             | 1
Bezier Interpolation  | N/A                           | 112

Observation. PAEK Algorithm: When this technique was used, as the simplification tolerance value was increased, the weight of each point in the image decreased and the image was smoothed more. Also, the output curves generated do not pass through the input line vertices; however, the endpoints are retained. A significant shortcoming of the PAEK algorithm is that, in a bid to smooth some rough edges, it eliminates important boundaries; to prevent this, a buffer is to be applied to a zone of a certain width before allowing the PAEK smoothing algorithm to execute (Amelinckx, 2007). Bezier Interpolation: This is the other algorithm that can be applied to achieve the smoothing technique on polygons. In this case, the parameters are the same as PAEK's, except that the tolerance value is greyed out (no value is to be input), and as a result the output image produced is close to its source, because the tolerance value is responsible for smoothing rough edges: the higher the value stated, the more the polygon is smoothed. The output curves pass through the input line vertices. 
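ESRI does not publish PAEK's exact implementation, but the behaviour observed above (a weighted moving average whose output misses the input vertices while keeping the endpoints fixed) can be sketched with a simple exponential-kernel smoother. The function below is an illustrative approximation written for this report, not ESRI's code; the kernel shape and the use of straight-line distance are assumptions.

```python
import math

def paek_smooth(points, tolerance):
    """Simplified sketch of exponential-kernel line smoothing.

    Each output vertex is a weighted average of the input vertices, with
    weights decaying exponentially with distance; the endpoints are kept
    fixed, mirroring the FIXED_ENDPOINT option. This is an illustrative
    approximation of PAEK, not ESRI's actual implementation.
    """
    if len(points) < 3:
        return list(points)
    smoothed = [points[0]]                        # first endpoint retained
    for i in range(1, len(points) - 1):
        wx = wy = wsum = 0.0
        for (x, y) in points:
            d = math.dist(points[i], (x, y))      # straight-line distance as a proxy
            w = math.exp(-(d / tolerance) ** 2)   # exponential kernel weight
            wx, wy, wsum = wx + w * x, wy + w * y, wsum + w
        smoothed.append((wx / wsum, wy / wsum))
    smoothed.append(points[-1])                   # last endpoint retained
    return smoothed
```

Run on a zigzag line, the smoothed interior vertices are pulled toward the local average, so peaks flatten as the tolerance grows, matching the behaviour described above: the output curve no longer passes through the input vertices, but the two endpoints survive unchanged.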
When this experiment was performed, it was noticed that its curves were properly aligned around the vertices. Conclusion: After performing both experiments, it was observed that the PAEK algorithm is better because it allows a tolerance value to be input, which in turn gives a more smoothed image around curves; this will be of more importance to cartographers who want to smooth their image and remove redundant points. 2.2 Smooth Line This is the second tool we will be examining. It is similar to the Smooth Polygon technique, except that the input feature has to be a polyline shapefile. The steps are repeated as illustrated for Smooth Polygon, but under Generalization Tools, Smooth Line is chosen. Under input feature, select gower1, a dataset provided for use in this report. Specify the output feature > smoothing algorithm (PAEK) > smoothing tolerance. Note: all other fields are left as defaults, i.e. NO_CHECK (meaning we do not want it to display any errors if encountered and fixed) and FIXED_ENDPOINT (which preserves the endpoints of a polygon or line and applies to the PAEK algorithm).

Algorithm Type        | Simplification Tolerance (km) | Time Taken (secs)
PAEK                  | 1000                          | 2
Bezier Interpolation  | N/A                           | 4

Fig 2.2: Display after the Smooth Line technique was applied. __________ (Before Smoothing Line) __________ (After Smoothing Line) Observation. PAEK Algorithm: The tolerance value used here was very high in order to make the changes physically visible. The PAEK algorithm, as applied on gower1, smoothed the curves around edges and eliminated unimportant points around them. This results in an image with fewer points as the tolerance value is increased. The output line does not pass through the input line vertices. The algorithm works by taking a weighted average of nearby points, so that a particular vertex is replaced with the averaged coordinates of its neighbouring vertices. 
This is done sequentially for each vertex, but displacement of the shape is averted by giving higher weight to the central point than to its neighbouring vertices. Bezier Interpolation: Just as in Smooth Polygon, a tolerance value is not required; when this technique was performed in this illustration, points around edges were partially retained, resulting in smooth curves drawn around the vertices. The output line passes through the input line vertices. Conclusion: From both illustrations, just as in Smooth Polygon, the PAEK algorithm was considered most effective because it generates smoother curves around the edges as the tolerance value is increased. However, the true shape of the image can gradually be lost as this value is increased, whereas with Bezier Interpolation the curves around the vertices are preserved, just smoothed, and the vertices are maintained as well. 2.3 Simplify Polygon: This method is aimed at removing awkward bends around vertices while preserving the shape. There are two algorithms involved: Point Remove and Bend Simplify. The shapefile used for this illustration is the polygon for the Huntingdonshire district of England. Select Simplify Polygon (under Generalization Tools, which is under Data Management Tools) > then the input feature (Polygon) > output feature > simplification algorithm > simplification tolerance.

Algorithm Type  | Simplification Tolerance (km) | Time Taken (secs)
Point Remove    | 2                             | 4
Bend Simplify   | 2                             | 9

Fig 2.3: Display before Simplify Polygon. Fig 2.4: Display after Simplify Polygon. Point Remove Algorithm: This is a variant of the Douglas-Peucker algorithm, and it applies the area/perimeter quotient first used in the Wang algorithm (Wang, 1999, cited in ESRI, 2007). From the above experiment, as the tolerance value was increased, more vertices in the polygon were eliminated. This technique simplifies the polygon by removing many vertices, and in doing so it loses the original shape as the tolerance value is gradually increased. 
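Since Point Remove derives from Douglas-Peucker, a minimal sketch of the classic Douglas-Peucker algorithm shows why a larger tolerance removes more vertices. This is the textbook algorithm, not ESRI's exact variant (which adds the area/perimeter refinements mentioned above):

```python
def perp_dist(p, a, b):
    """Perpendicular distance from point p to the chord through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
    return abs(dy * px - dx * py + bx * ay - by * ax) / (dx * dx + dy * dy) ** 0.5

def douglas_peucker(points, tolerance):
    """Classic Douglas-Peucker simplification, the basis of Point Remove.

    The vertex farthest from the endpoint chord is kept if it exceeds the
    tolerance, and both halves are simplified recursively; otherwise the
    whole run collapses to the chord.
    """
    if len(points) < 3:
        return list(points)
    dmax, idx = 0.0, 0
    for i in range(1, len(points) - 1):
        d = perp_dist(points[i], points[0], points[-1])
        if d > dmax:
            dmax, idx = d, i
    if dmax <= tolerance:
        return [points[0], points[-1]]        # every interior vertex removed
    left = douglas_peucker(points[: idx + 1], tolerance)
    right = douglas_peucker(points[idx:], tolerance)
    return left[:-1] + right                  # avoid duplicating the split vertex
```

A nearly straight line collapses to its two endpoints, while a genuine corner survives any tolerance smaller than its offset, which is exactly the sharp-angled, vertex-poor output observed in the experiments above.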
Bend Simplify Algorithm: This algorithm was pioneered by Wang and Müller and is aimed at simplifying shapes through the detection of bends. It does this by eliminating insignificant vertices, and the resultant output has better geometry preservation. Observation: After applying both algorithms to the polygon above, it was seen that for Point Remove the vertices reduced dramatically as the tolerance value was increased in multiples of 2 km. This amounts to about a 95% reduction, while when the same approach was applied to Bend Simplify there was about a 30% reduction in the number of vertices. Bend Simplify also took a longer time to execute. Conclusion: Bend Simplify is the better option when geometry is to be preserved; however, when the shape is to be represented on a smaller scale, Point Remove will be ideal because the shape is reduced significantly, appearing as a shrunken image of the original. 2.4 Simplify Line This is a similar procedure to Simplify Polygon, except that here the shapefile to be considered is a line, or a polygon which contains intersecting lines. It is a process that involves reducing the number of vertices that represent a line feature. This is achieved by preserving the vertices that are more relevant and expunging those that are redundant, such as repeated curves or area partitions, without disrupting the original shape (Alves et al., 2010). Two layers are generated when this technique is performed: a line feature class and a point feature class. The former contains the simplified line, while the latter contains vertices that have been simplified to the point that they can no longer be seen as a line and are instead collapsed to a point. This applies to Simplify Polygon too. However, for both exercises no vertex was collapsed to a point feature. 
To illustrate this, the process from the previous generalization technique is repeated, but under Data Management Tools > select Simplify Line > select the input feature (gower1) > select the output feature > select the algorithm (Point Remove) > tolerance. Then accept all other defaults, because we are not interested in the errors.

Algorithm Type  | Simplification Tolerance (km) | Time Taken (secs)
Point Remove    | 8                             | 7
Bend Simplify   | 8                             | 12

Fig 2.5: Display after Simplify Line. __________ (Before Simplifying Line) __________ (After Simplifying Line) Two algorithms are available for performing this operation: Point Remove and Bend Simplify. Observation. Point Remove Algorithm: This method has been described under Simplify Polygon. It is observed here that when the Point Remove algorithm was used, the lines in gower1 were redrawn such that redundant vertices were removed; this became even more evident as the tolerance value increased, so that the line had sharp angles around curves and its initial geometry was gradually lost. Bend Simplify Algorithm: This also reduces the number of vertices in a line, and the more the tolerance value was increased, the greater the reduction in vertices. It takes a longer time to execute than Point Remove; however, the original character of the line feature is preserved. Conclusion: From the two practical exercises, the Bend Simplify algorithm is more accurate because it preserves the line feature and its original shape is not too distorted. However, if the feature is to be represented on a much smaller scale and data compression is the deciding factor, then Point Remove will be the option to embrace. 2.5 Aggregate Polygon: This process involves amalgamating polygons with neighbouring boundaries. It merges separate polygons (both distinct and adjacent ones), and a new perimeter is obtained which maintains the surface area of all the encompassed polygons that were merged together. 
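The first step of such aggregation, deciding which polygons fall within the aggregation distance of one another, can be sketched with a union-find grouping. This is an illustrative simplification written for this report: it uses the minimum vertex-to-vertex gap as a stand-in for true boundary distance, and it omits the construction of the merged outline that the real tool performs.

```python
import math

def min_vertex_gap(poly_a, poly_b):
    """Smallest vertex-to-vertex distance between two polygons; a crude
    proxy for true boundary distance, used here only for illustration."""
    return min(math.dist(p, q) for p in poly_a for q in poly_b)

def aggregate_groups(polygons, aggregation_distance):
    """Sketch of the grouping step of Aggregate Polygons: polygons whose
    gap is within the aggregation distance land in the same cluster.
    A spatial index would speed up the pairwise search; omitted here."""
    parent = list(range(len(polygons)))
    def find(i):                              # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(len(polygons)):
        for j in range(i + 1, len(polygons)):
            if min_vertex_gap(polygons[i], polygons[j]) <= aggregation_distance:
                parent[find(i)] = find(j)     # merge the two clusters
    groups = {}
    for i in range(len(polygons)):
        groups.setdefault(find(i), []).append(i)
    return sorted(groups.values())
```

With a 2 km aggregation distance, districts separated by smaller gaps would be clustered and merged, while distant districts stay apart; raising the distance progressively merges everything, matching the over-merging described below.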
To illustrate this, select Data Management Tools > select Aggregate Polygons > select the input feature (a selection of several districts from the England_dt_2001 area shapefile I downloaded) > output feature class > aggregation distance (the boundary distance between polygons); I left the other values as defaults. Fig 2.6: Display before Aggregate Polygon. Fig 2.7: Display after Aggregate Polygon. Aggregation distance used: 2 km. Time taken: 48 secs. As seen from both figures, the districts in Fig 2.6 were joined together as seen in Fig 2.7. As the aggregation distance is increased further, the separate districts are over-merged and the resultant image appears like a plain wide surface, until the hollow parts seen in Fig 2.7 disappear. The algorithm used here, which is built into the ArcGIS software, is the Sort-Tile-Recursive (STR) tree. This algorithm computes all the nodes of neighbouring polygons by traversing them in a logical sequence from left to right. When this computation is complete, the result is stored as a referenced node. The middle node of the tree is then obtained, and thereafter a merge is calculated which spans from the left node to the right node until it gets to the root of the tree (Xie, 2010). 2.6 Simplify Building: This process simplifies polygon shapes in the form of buildings, with the aim of preserving their original structure. To illustrate this, Simplify Building is chosen under Data Management Tools. The appropriate fields are chosen; the input feature here is a building shapefile I extracted from a MasterMap download of the postcode area CF37 1TW. (a) (b) (c) (d) Fig 2.8: Display before Simplify Building. Fig 2.9: Display after Simplify Building. As shown above, the buildings (a and b) in Fig 2.8 were simplified to (c and d) in Fig 2.9, where a tolerance value of 10 km was used; the time taken to execute this task was 3 secs. The more the tolerance value is increased, the more simplified the building becomes, and the more it loses its shape. 
The algorithm behind the scenes is a recursive approach which was first implemented in the C++ programming language but has evolved into DLL (Dynamic Link Library) applications such as ArcGIS 9.3. The recursive approach follows this sequence of steps: determine the angle of rotation α of the building, compute the nodes around its boundary, and enclose a small rectangular area which contains the set of points; set the angle of rotation α; determine the vertices around the edges with respect to the recursion used, and thereafter calculate the splitting rate μ and a recursive decomposition of the edge with respect to the new edges. The shortcoming of this algorithm is that L- and Z-shaped buildings can give erroneous shapes, while it works well on U-shaped buildings (Bayer, 2009). 2.7 Eliminate: This technique basically works on an input layer with a selection, which can take the form of either a Select By Location or a Select By Attribute query. The resultant image chops off the selection, and the remaining parts of the layer file are then drawn. To illustrate this, Eliminate is chosen under Data Management Tools; the input feature here is the England_dt_2001 area shapefile, which has some districts selected, and the output feature is specified, with all other fields left as defaults. After the Eliminate procedure was applied to the polygon in Fig 3.0 (the green highlights being the selected features), the resultant polygon is shown in Fig 3.1. The districts in Fig 3.1 now exclude all those selected in Fig 3.0; this can be seen visually at labels a and b, and therefore Fig 3.1 has fewer districts. (a) (b) Fig 3.0: Display before the Eliminate process. Fig 3.1: Display after the Eliminate process. The time taken for this procedure was 44 secs. 
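The net effect of Eliminate as used in this exercise, dropping the selected features and drawing the rest, can be sketched as a simple filter. The record layout (`id` keys and a set of selected ids) is hypothetical, chosen only for illustration; the real tool works on feature layers and also supports merging eliminated slivers into neighbours.

```python
def eliminate(features, selected_ids):
    """Sketch of the Eliminate effect described above: keep only the
    features that were NOT part of the selection (hypothetical records)."""
    return [feat for feat in features if feat["id"] not in selected_ids]
```

Applied to the district layer, the output contains every district except those highlighted by the Select By Location or Select By Attribute query, which is why Fig 3.1 shows fewer districts than Fig 3.0.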
2.8 Dissolve: The Dissolve tool works similarly to Aggregate Polygon, except that in Dissolve it is the features of the polygons that are aggregated, based on their attributes, rather than the separate polygons themselves. The features are merged together using different statistic types applied to their fields. To illustrate this, click on Dissolve under Data Management Tools, select the input features (the same ones used for Aggregate Polygons) > the output field (where the result is to be saved) > the dissolve field (the fields you want to aggregate on) > statistic type > MULTI_PART > DISSOLVE_LINES. The diagram below shows this. Observation: For this exercise, the dissolve field was left as default, meaning no field was selected. Also, MULTI_PART was used: when smaller fields are merged into a large one, the features can become so extensive that displaying them on a map causes a loss of performance; the MULTI_PART option makes sure larger features are split into separate smaller ones. The DISSOLVE_LINES option makes sure lines are dissolved into one feature, while UNSPLIT_LINES only dissolves two lines when they have an end node in common. The matching logic for this technique is simply Boolean (a true-or-false test, yes or no). However, there are shortcomings with this technique: low virtual memory on the computer can limit the features that can be dissolved, although input features can be divided into parts by an algorithm called adaptive tiling. Fig 3.2: Display before the Dissolve process. Fig 3.3: Display after the Dissolve process. Time taken = 10 secs. 2.9 Collapse Dual Lines: This is useful when centre lines are to be generated between two or more parallel lines with a specific width. It can be very useful when you have to consider large road networks in a block or casing, as it enables you to visualize them properly. 
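Before moving on, the attribute side of the Dissolve tool described above can be sketched as a group-and-summarize operation. Geometry merging is omitted, and the record layout, field names, and statistic keywords below are illustrative stand-ins, not the exact ArcGIS parameter values.

```python
from collections import defaultdict

def dissolve(features, dissolve_field, stat_field, stat="SUM"):
    """Sketch of the attribute behaviour of Dissolve: features sharing a
    value in `dissolve_field` are merged into one record, and their
    `stat_field` values are combined with the chosen statistic type."""
    groups = defaultdict(list)
    for feat in features:
        groups[feat[dissolve_field]].append(feat[stat_field])
    stats = {"SUM": sum, "MIN": min, "MAX": max,
             "MEAN": lambda vals: sum(vals) / len(vals), "COUNT": len}
    return {key: stats[stat](vals) for key, vals in groups.items()}
```

For example, dissolving district records on a shared region field with the SUM statistic yields one record per region carrying the combined value, which is what the statistic-type parameter controls in the real tool.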
To illustrate this, open Collapse Dual Lines under Data Management Tools > select the input feature (gower1) > select the output feature > select the maximum width. The maximum width is the largest width of the casing allowed to contain the feature to be collapsed (e.g. the width of a road network), while the minimum width is the smallest value from which its centre line can be derived. In this exercise, maximum width = 4 km; time taken = 4 secs. Fig 3.4: Display after Collapse Dual Line to Centerline. __________ (Before Collapse Dual Line) __________ (After Collapse Dual Line) As seen above, when this experiment was performed, the lines in blue are the results of the operation, as they had a red colour before. The lines in red did not change because they did not have a width within the specified maximum width. However, this will change as the maximum width is increased or a minimum width is set. 3.0 Conclusion From the illustrations shown in this paper, we can see that the various generalization tools have their various purposes, whether shape retention, angular preservation, or simple reduction, so that a replica of an image shown on a larger scale can fit properly on a smaller scale. However, depending on the tool chosen, a compromise will have to be made among these factors, giving preference to what we want represented after performing the operation. Different algorithms were explored, and it is inferred that when polygons or lines are to be simplified, Point Remove is the accurate option when you want to represent them on a smaller scale; however, if originality of shape is to be considered, then the Bend Simplify algorithm will work best, while for the smoothing technique on polygons and lines, the PAEK algorithm is better.

Friday, October 25, 2019

The Ethics of Cloning :: Persuasive Essay, Argumentative

The Ethics of Cloning Regardless of what our future holds, it will be based on the decisions we make today. Those decisions can be made using the Utilitarian Theory, which states that we should do good for the greatest number of people. Using Rule Utilitarianism, "which maintains that a behavioral code or rule is morally right if the consequences of adopting that rule are more favorable than unfavorable to everyone" (IEP), it is justifiably noted that if a consensus is formed on the basis of rules that govern cloning, and these rules are broken, the appropriate punishment will result. This is because cloning a human will not benefit society as a whole; it would do more harm than good. We all have rules that govern our society over what is right or wrong, and we know that these rules are set forth to maintain order. We have laws because they benefit the majority of the people. The Principle of Consequences states that when looking at the end result, the correct action will be the action that produces the greatest amount of happiness (Ursery). To decide if human cloning produces the greatest amount of happiness, one question still in need of an answer is "Are human embryos really human?" Well, the term 'human' preceding the term 'embryo' should adequately answer the question. The embryos are cloned from human tissue and contain human DNA; thus there is likely a 100 percent chance that the embryos are indeed human, as opposed to being tadpole embryos. Therefore, biologically speaking, a clone is no less a human than you or I. And using that human for tissue simply because he or she was cloned rather than conceived does not validate the notion, nor skirt the moral and ethical implications of taking the life of another human being. Death is not a happy occasion; therefore it does not produce the greatest amount of happiness for the majority of the population.
The bad consequences outweigh the good; therefore we cannot assume that the benefit of human cloning will solve life's problems. To this day we have yet to find a cure for the common cold. This is because most diseases have a way of surviving, as the human race did during the ice age. Everything finds a way to adapt to its environment, and if the major benefit of cloning is to cure diseases, then we are at a loss.

Thursday, October 24, 2019

Haptic Technology Essay

Haptics is the "science of applying tactile sensation to human interaction with computers". The sensation of touch is the brain's most effective learning mechanism, more effective than seeing or hearing, which is why the new technology holds so much promise as a teaching tool. With this technology we can now sit down at a computer terminal and touch objects that exist only in the "mind" of the computer. By using special input/output devices (joysticks, data gloves, or other devices), users can receive feedback from computer applications in the form of felt sensations in the hand or other parts of the body. In combination with a visual display, haptic technology can be used to train people for tasks requiring hand-eye coordination, such as surgery and spaceship maneuvers. In our paper we discuss the basic concepts behind haptics along with haptic devices and how these devices interact to produce the sense of touch and force feedback mechanisms. Then we move on to a few applications of haptic technology. Finally, we conclude by mentioning a few future developments. Introduction: Haptic technology, or haptics, is a tactile feedback technology which takes advantage of the sense of touch by applying forces, vibrations, or motions to the user. This mechanical stimulation can be used to assist in the creation of virtual objects in a computer simulation, to control such virtual objects, and to enhance the remote control of machines and devices (telerobotics). It has been described as "doing for the sense of touch what computer graphics does for vision". Haptic devices may incorporate tactile sensors that measure forces exerted by the user on the interface. Haptic technology has made it possible to investigate how the human sense of touch works by allowing the creation of carefully controlled haptic virtual objects. These objects are used to systematically probe human haptic capabilities, which would otherwise be difficult to achieve. 
These research tools contribute to the understanding of how touch and its underlying brain functions work. The word haptic, from the Greek ἁπτικός (haptikos), means pertaining to the sense of touch and comes from the Greek verb ἅπτεσθαι (haptesthai), meaning to contact or to touch. WHAT IS HAPTICS Haptics is quite literally the science of touch. The origin of the word haptics is the Greek haptikos, meaning able to grasp or perceive. Haptic sensations are created in consumer devices by actuators, or motors, which create a vibration. Those vibrations are managed and controlled by embedded software and integrated into device user interfaces and applications via the embedded control software APIs. You've probably experienced haptics in many of the consumer devices that you use every day. The rumble effect in your console game controller and the reassuring touch vibration you receive on your smartphone dial pad are both examples of haptic effects. In the world of mobile devices, computers, consumer electronics, and digital devices and controls, meaningful haptic information is frequently limited or missing. For example, when dialing a number or entering text on a conventional touchscreen without haptics, users have no sense of whether they have successfully completed a task. With Immersion's haptic technology, users feel a vibrating force or resistance as they push a virtual button, scroll through a list, or encounter the end of a menu. In a video or mobile game with haptics, users can feel the gun recoil, the engine rev, or the crack of the bat meeting the ball. When simulating the placement of cardiac pacing leads, a user can feel the forces that would be encountered when navigating the leads through a beating heart, providing a more realistic experience of performing this procedure. 
Haptics can enhance the user experience through: * Improved Usability: By restoring the sense of touch to otherwise flat, cold surfaces, haptics creates fulfilling multi-modal experiences that improve usability by engaging touch, sight, and sound. From the confidence a user receives through touch confirmation when selecting a virtual button to the contextual awareness they receive through haptics in a first-person shooter game, haptics improves usability by more fully engaging the user's senses. * Enhanced Realism: Haptics injects a sense of realism into user experiences by exciting the senses and allowing the user to feel the action and nuance of the application. This is particularly relevant in applications like games or simulations that otherwise rely only on visual and audio inputs. The inclusion of tactile feedback provides additional context that translates into a sense of realism for the user. * Restoration of Mechanical Feel: Today's touchscreen-driven devices lack the physical feedback that humans frequently need to fully understand the context of their interactions. By providing users with intuitive and unmistakable tactile confirmation, haptics can create a more confident user experience and can also improve safety by overcoming distractions. This is especially important when audio or visual confirmation is insufficient, such as in industrial applications, or in applications that involve distractions, such as automotive navigation. HISTORY OF HAPTICS In the early 20th century, psychophysicists introduced the word haptic to label the subfield of their studies that addressed human touch-based perception and manipulation. In the 1970s and 1980s, significant research efforts in a completely different field, robotics, also began to focus on manipulation and perception by touch. Initially concerned with building autonomous robots, researchers soon found that building a dexterous robotic hand was much more complex and subtle than their initial naive hopes had suggested. 
In time these two communities, one that sought to understand the human hand and one that aspired to create devices with dexterity inspired by human abilities, found fertile mutual interest in topics such as sensory design and processing, grasp control and manipulation, object representation and haptic information encoding, and grammars for describing physical tasks. In the early 1990s a new usage of the word haptics began to emerge. The confluence of several emerging technologies made virtualized haptics, or computer haptics, possible. Much like computer graphics, computer haptics enables the display of simulated objects to humans in an interactive manner. However, computer haptics uses a display technology through which objects can be physically palpated. Basic system configuration: a haptic system consists of two parts, namely the human part and the machine part. In the figure shown above, the human part (left) senses and controls the position of the hand, while the machine part (right) exerts forces on the hand to simulate contact with a virtual object. Both systems are provided with the necessary sensors, processors and actuators. In the case of the human system, nerve receptors perform sensing, the brain performs processing and muscles perform actuation of the motion performed by the hand, while in the case of the machine system these functions are performed by encoders, a computer and motors respectively. Haptic Information The haptic information provided by the system is a combination of (i) tactile information and (ii) kinesthetic information. Tactile information refers to the information acquired by sensors connected to the skin of the human body, with particular reference to the spatial distribution of pressure, or more generally tractions, across the contact area. For example, when we handle flexible materials like fabric and paper, we sense the pressure variation across the fingertip.
This is a sort of tactile information. Tactile sensing is also the basis of complex perceptual tasks like medical palpation, where physicians locate hidden anatomical structures and evaluate tissue properties using their hands. Kinesthetic information refers to the information acquired through the sensors in the joints. Interaction forces are normally perceived through a combination of these two kinds of information. Creation of a virtual environment (virtual reality) Virtual reality is the technology which allows a user to interact with a computer-simulated environment, whether that environment is a simulation of the real world or an imaginary world. Most current virtual reality environments are primarily visual experiences, displayed either on a computer screen or through special stereoscopic displays, but some simulations include additional sensory information, such as sound through speakers or headphones. Some advanced haptic systems now include tactile information, generally known as force feedback, in medical and gaming applications. Users can interact with a virtual environment or a virtual artifact (VA) either through the use of standard input devices such as a keyboard and mouse, or through multimodal devices such as a wired glove, the Polhemus boom arm, or an omnidirectional treadmill. The simulated environment can be similar to the real world, for example in simulations for pilot or combat training, or it can differ significantly from reality, as in VR games. In practice, it is currently very difficult to create a high-fidelity virtual reality experience, largely due to technical limitations on processing power, image resolution and communication bandwidth. However, those limitations are expected to eventually be overcome as processor, imaging and data communication technologies become more powerful and cost-effective over time.
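The tactile/kinesthetic distinction drawn above can be made concrete with a toy example (all numbers invented for illustration): the tactile channel is a spatial map of pressure over the contact area, from which a total contact force can be integrated, while the kinesthetic channel is a set of joint readings:

```python
# Tactile channel: a 3x3 pressure map over the fingertip contact area (kPa).
pressure_kpa = [
    [0.0, 2.1, 0.0],
    [1.8, 6.5, 1.9],
    [0.0, 2.2, 0.0],
]
cell_area_m2 = 1e-6  # each sensing cell covers 1 mm^2 (illustrative)

# Integrating pressure over the contact area gives the total normal force.
total_force_n = sum(p * 1e3 * cell_area_m2 for row in pressure_kpa for p in row)

# Kinesthetic channel: joint-angle readings (degrees) from the finger joints.
joint_angles_deg = {"MCP": 25.0, "PIP": 40.0, "DIP": 15.0}

print(round(total_force_n, 4))  # 0.0145 N
```

The point of the sketch is only that the two channels carry different kinds of data: a spatial distribution versus per-joint values, which an interface combines to perceive interaction forces.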
Virtual Reality is often used to describe a wide variety of applications, commonly associated with its immersive, highly visual, 3D environments. The development of CAD software, graphics hardware acceleration, head-mounted displays, data gloves and miniaturization have helped popularize the notion. The most successful use of virtual reality is in computer-generated 3-D simulators. Pilots use flight simulators, which are designed just like the cockpit of an airplane or helicopter. The screen in front of the pilot creates the virtual environment, and the trainers outside the simulator command it to adopt different modes. The pilots are trained to control the plane in different difficult situations and in emergency landings. The simulator provides the environment. These simulators cost millions of dollars. Virtual environment The virtual reality games are used in almost the same fashion. The player wears special gloves, headphones, goggles, a full body suit and special sensory input devices, and feels as if he is in the real environment. The special goggles contain monitors, and the environment changes according to the movements of the player. These games are very expensive. Haptic Feedback Virtual reality (VR) applications strive to simulate real or imaginary scenes with which users can interact and perceive the effects of their actions in real time. Ideally the user interacts with the simulation via all five senses. However, today's typical VR applications rely on a smaller subset, typically vision, hearing, and more recently, touch. The figure below shows the structure of a VR application incorporating visual, auditory, and haptic feedback.
Haptic Feedback Block Diagram The application's main elements are: 1) the simulation engine, responsible for computing the virtual environment's behaviour over time; 2) visual, auditory, and haptic rendering algorithms, which compute the virtual environment's graphic, sound, and force responses toward the user; and 3) transducers, which convert visual, audio, and force signals from the computer into a form the operator can perceive. The human operator typically holds or wears the haptic interface device and perceives audiovisual feedback from audio displays (computer speakers, headphones, and so on) and visual displays (for example a computer screen or head-mounted display). Whereas the audio and visual channels feature unidirectional information and energy flow (from the simulation engine toward the user), the haptic modality exchanges information and energy in two directions, from and toward the user. This bi-directionality is often referred to as the single most important feature of the haptic interaction modality. HAPTIC DEVICES A haptic device is one that provides a physical interface between the user and the virtual environment by means of a computer. This can be done through an input/output device that senses the body's movement, such as a joystick or data glove. By using haptic devices, the user can not only feed information to the computer but can also receive information from the computer in the form of a felt sensation on some part of the body. This is referred to as a haptic interface. These devices can be broadly classified into: a) Virtual reality/tele-robotics based devices: exoskeletons and stationary devices, gloves and wearable devices, point-source and specific-task devices, locomotion interfaces; b) Feedback devices: force feedback devices, tactile displays. Virtual reality/tele-robotics based devices: Exoskeletons and stationary devices: The term exoskeleton refers to the hard outer shell that exists on many creatures.
In a technical sense, the word refers to a system that covers the user or that the user has to wear. Current haptic devices that are classified as exoskeletons are large and immobile systems that the user must attach himself or herself to. Gloves and wearable devices: These devices are smaller exoskeleton-like devices that are often, but not always, tied down by a large exoskeleton or other immobile device. Since the goal of building a haptic system is to immerse the user in the virtual or remote environment, it is important to provide as small a reminder of the user's actual environment as possible. The drawback of wearable systems is that, since the weight and size of the devices are a concern, they have more limited sets of capabilities. Point sources and specific-task devices: This is a class of devices that are very specialized for performing a particular task. Designing a device to perform a single type of task restricts the application of that device to a much smaller number of functions; however, it allows the designer to make the device perform its task extremely well. These task devices have two general forms, single-point-of-interface devices and specific-task devices. Locomotion interfaces: An interesting application of haptic feedback is full-body force feedback in the form of locomotion interfaces. Locomotion interfaces are movement- and force-restricting devices in a confined space, simulating unrestrained mobility such as walking and running for virtual reality. These interfaces overcome the limitations of joysticks for maneuvering, of whole-body motion platforms, in which the user is seated and does not expend energy, and of room environments, where only short distances can be traversed.
b) Feedback Devices: Force feedback devices: Force feedback input devices are usually, but not exclusively, connected to computer systems and are designed to apply forces to simulate the sensation of weight and resistance, in order to provide information to the user. As such, feedback hardware represents a more sophisticated form of input/output device, complementing others such as keyboards, mice or trackers. Input from the user is in the form of hand or other body-segment motion, whereas feedback from the computer or other device is in the form of force or position. These devices translate digital information into physical sensations. Tactile display devices: Simulation tasks involving active exploration or delicate manipulation of a virtual environment require the addition of feedback data that presents an object's surface geometry or texture. Such feedback is provided by tactile feedback systems or tactile display devices. Tactile systems differ from haptic systems in the scale of the forces being generated: while haptic interfaces present the shape, weight or compliance of an object, tactile interfaces present the surface properties of an object, such as its surface texture. Tactile feedback applies sensation to the skin. c) COMMONLY USED HAPTIC INTERFACING DEVICES: PHANTOM: A haptic interfacing device developed by SensAble Technologies, primarily used for providing a 3D touch to virtual objects. This is a very high resolution 6 DOF device in which the user holds the end of a motor-controlled jointed arm. It provides a programmable sense of touch that allows the user to feel the texture and shape of a virtual object with a very high degree of realism. One of its key features is that it can model free-floating 3-dimensional objects. Cyber glove: The principle of a Cyber glove is simple.
It consists of opposing the movement of the hand in the same way that an object squeezed between the fingers resists the movement of the latter. The glove must therefore be capable, in the absence of a real object, of recreating the forces applied by the object on the human hand with (1) the same intensity and (2) the same direction. These two conditions can be simplified by requiring the glove to apply a torque about the interphalangian joint. The solution that we have chosen uses a mechanical structure with three passive joints which, with the interphalangian joint, make up a flat four-bar closed-link mechanism. This solution uses cables placed in the interior of the four-bar mechanism, following a trajectory identical to that used by the extensor tendons, which by nature oppose the movement of the flexor tendons in order to harmonize the movement of the fingers. Among the advantages of this structure one can cite: • allows 4 DOF for each finger; • adapts to different sizes of the finger; • is located on the back of the hand; • applies different forces on each phalanx (with the possibility of applying a lateral force on the fingertip by motorizing the abduction/adduction joint); • measures finger angular flexion (the measurements of the joint angles are independent and can have good resolution, given the long paths travelled by the cables when the finger closes). Cyber glove Mechanism Mechanical structure of a Cyber glove: The glove is made up of five fingers and has 19 degrees of freedom, 5 of which are passive. Each finger is made up of a passive abduction joint, which links it to the base (palm), and of 9 rotoid joints which, with the three interphalangian joints, make up 3 closed-link four-bar mechanisms with 1 degree of freedom. The structure of the thumb is composed of only two closed links, for 3 DOF of which one is passive. The segments of the glove are made of aluminum and can withstand high loads; their total weight does not surpass 350 grams.
The length of the segments is proportional to the length of the phalanxes. All of the joints are mounted on miniature ball bearings in order to reduce friction. Fig 3.4 Mechanical structure of the Cyber glove The mechanical structure offers two essential advantages: the first is the facility of adapting to different sizes of the human hand; lateral adjustment is also provided in order to adapt the interval between the fingers at the palm. The second advantage is the presence of physical stops in the structure, which offer complete security to the operator. The force sensor is placed on the inside of a fixed support on the upper part of the phalanx. The sensor is made up of a steel strip onto which a strain gauge is glued. The position sensors used to measure the cable displacement are incremental optical encoders offering an average theoretical resolution of 0.1 deg for the finger joints. Control of the Cyber glove: The glove is controlled by 14 DC torque motors which can develop a maximal torque of 1.4 Nm and a continuous torque of 0.12 Nm. On each motor we fix a pulley with an 8.5 mm radius onto which the cable is wound. The maximal force that the motor can exert on the cable is thus equal to 14.0 N, a value sufficient to ensure opposition to the movement of the finger. The electronic interface of the force feedback data glove is made of a PC with several acquisition cards. The global scheme of the control is given in the figure shown below. One can distinguish two command loops: an internal loop, which corresponds to a classic force control with constant gains, and an external loop, which integrates the model of deformation of the virtual object in contact with the fingers. In this scheme the action of the human on the position of the finger joints is taken into consideration by both control loops.
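The cable force quoted above can be checked from the motor and pulley figures given: a cable wound on a pulley transmits a force F = τ/r, and with the continuous torque rating this reproduces the roughly 14.0 N stated for sustained opposition to the finger:

```python
radius_m = 8.5e-3     # pulley radius: 8.5 mm
tau_cont_nm = 0.12    # continuous motor torque, N·m
tau_peak_nm = 1.4     # maximal motor torque, N·m

f_cont_n = tau_cont_nm / radius_m  # sustained cable force, F = tau / r
f_peak_n = tau_peak_nm / radius_m  # transient peak cable force

print(round(f_cont_n, 1), round(f_peak_n, 1))  # 14.1 164.7
```

Note that the ~14.0 N figure corresponds to the continuous torque rating; the peak torque would allow a much larger transient force.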
The human is considered as a displacement generator, while the glove is considered as a force generator. Haptic Rendering: Haptic rendering is the process of applying forces to the user through a force-feedback device. Using haptic rendering, we can enable a user to touch, feel and manipulate virtual objects, enhancing the user's experience in a virtual environment. It is the process of displaying synthetically generated 2D/3D haptic stimuli to the user. The haptic interface acts as a two-port system terminated on one side by the human operator and on the other side by the virtual environment. Applications The addition of haptics to various applications of virtual reality and teleoperation opens exciting possibilities. Three example applications that have been pursued at our Touch Lab are summarized below. • Medical Simulators: Just as flight simulators are used to train pilots, the multimodal virtual environment system we have developed is being used in developing virtual reality based needle procedures and surgical simulators that enable a medical trainee to see, touch, and manipulate realistic models of biological tissues and organs. The work involves the development of both instrumented hardware and software algorithms for real-time displays. An epidural injection simulator has already been tested by residents and experts in two hospitals. A minimally invasive surgery simulator is also being developed and includes (a) in vivo measurement of the mechanical properties of tissues and organs, (b) development of a variety of real-time algorithms for the computation of tool-tissue force interactions and organ deformations, and (c) verification of the training effectiveness of the simulator. This work is reviewed in [9]. • Collaborative Haptics: In another project, the use of haptics to improve human-computer interaction, as well as human-human interactions mediated by computers, is being explored.
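A common way to implement the haptic rendering process described earlier is a penalty-based spring model: when the haptic interface point penetrates a virtual surface, a restoring force proportional to the penetration depth is commanded to the device. A minimal sketch, with an illustrative stiffness value not taken from the source:

```python
def render_contact_force(probe_z_m, surface_z_m=0.0, stiffness_n_per_m=500.0):
    """Penalty-based rendering against a flat horizontal surface:
    zero force in free space, Hooke's-law push-back in contact."""
    penetration_m = surface_z_m - probe_z_m  # positive when probe is below surface
    if penetration_m <= 0.0:
        return 0.0                            # free space: no force rendered
    return stiffness_n_per_m * penetration_m  # upward restoring force (N)

print(render_contact_force(0.01), render_contact_force(-0.002))  # 0.0 1.0
```

In a real system this computation runs inside the haptic rendering loop at a high update rate, with the resulting force sent to the device's motors; the sketch shows only the core force law.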
A multimodal shared virtual environment system has been developed and experiments have been performed with human subjects to study the role of haptic feedback in collaborative tasks and whether haptic communication through force feedback can facilitate a sense of being with and collaborating with a remote partner. Two scenarios, one in which the partners are in close proximity and the other in which they are separated by several thousand miles (transatlantic touch with collaborators at University College London, [11]), have been demonstrated. • Brain Machine Interfaces: In a collaborative project with Prof. Nicolelis of Duke University Medical School, we recently succeeded in controlling a robot in real-time using signals from about 100 neurons in the motor cortex of a monkey [12]. We demonstrated that this could be done not only with a robot within Duke, but also across the internet with a robot in our lab. This work opens a whole new paradigm for studying sensorimotor functions in the central nervous system. In addition, a future application is the possibility of implanted brain-machine interfaces for paralyzed patients to control external devices such as smart prostheses, similar to pacemakers or cochlear implants. Given below are several more potential applications: • Medicine: manipulating micro and macro robots for minimally invasive surgery; remote diagnosis for telemedicine; aids for the disabled, such as haptic interfaces for the blind. • Entertainment: video games and simulators that enable the user to feel and manipulate virtual solids, fluids, tools, and avatars. • Education: giving students the feel of phenomena at nano, macro, or astronomical scales; "what if" scenarios for non-terrestrial physics; experiencing complex data sets. • Industry: integration of haptics into CAD systems such that a designer can freely manipulate the mechanical components of an assembly in an immersive environment.
• Graphic Arts: virtual art exhibits, concert rooms, and museums in which the user can log in remotely to play the musical instruments, and to touch and feel the haptic attributes of the displays; individual or co-operative virtual sculpting across the internet. APPLICATIONS, LIMITATIONS & FUTURE VISION MEDICINE Haptic interfaces for medical simulation may prove especially useful for training in minimally invasive procedures such as laparoscopy and interventional radiology, as well as for performing remote surgery. A particular advantage of this type of work is that surgeons can perform more operations of a similar type with less fatigue. It is well documented that a surgeon who performs more procedures of a given kind will have statistically better outcomes for their patients. Haptic interfaces are also used in rehabilitation: using this technology, exercises can be simulated to help rehabilitate people with injuries. A Virtual Haptic Back (VHB) was successfully integrated into the curriculum at the Ohio University College of Osteopathic Medicine. Research indicates that the VHB is a significant teaching aid in palpatory diagnosis (detection of medical problems via touch). The VHB simulates the contour and stiffness of human backs, which are palpated with two haptic interfaces (SensAble Technologies, PHANToM 3.0). Haptics have also been applied in the field of prosthetics and orthotics. Research has been underway to provide essential feedback from a prosthetic limb to its wearer. Several research projects through the US Department of Education and National Institutes of Health have focused on this area. Recent work by Edward Colgate, Pravin Chaubey, and Allison Okamura et al. focused on investigating fundamental issues and determining effectiveness for rehabilitation. Video games Haptic feedback is commonly used in arcade games, especially racing video games.
In 1976, Sega's motorbike game Moto-Cross, also known as Fonz, was the first game to use haptic feedback, causing the handlebars to vibrate during a collision with another vehicle. Tatsumi's TX-1 introduced force feedback to car driving games in 1983. Simple haptic devices are common in the form of game controllers, joysticks, and steering wheels. Early implementations were provided through optional components, such as the Nintendo 64 controller's Rumble Pak. Many newer-generation console controllers and joysticks feature built-in feedback devices, including Sony's DualShock technology. Some automobile steering wheel controllers, for example, are programmed to provide a "feel" of the road: as the user makes a turn or accelerates, the steering wheel responds by resisting turns or slipping out of control. In 2007, Novint released the Falcon, the first consumer 3D touch device with high-resolution three-dimensional force feedback; this allowed the haptic simulation of objects, textures, recoil, momentum, and the physical presence of objects in games. Personal computers In 2008, Apple's MacBook and MacBook Pro started incorporating a "Tactile Touchpad" design with button functionality and haptic feedback incorporated into the tracking surface. Products such as the Synaptics ClickPad followed thereafter. Windows and Mac operating environments will also benefit greatly from haptic interactions: imagine being able to feel graphic buttons and receive force feedback as you depress a button. Mobile devices Tactile haptic feedback is becoming common in cellular devices. Handset manufacturers like LG and Motorola are including different types of haptic technologies in their devices; in most cases, this takes the form of vibration response to touch. The Nexus One features haptic feedback, according to its specifications. Nokia phone designers have perfected a tactile touchscreen that makes on-screen buttons behave as if they were real buttons.
When a user presses a button, he or she feels movement in and movement out, and also hears an audible click. Nokia engineers accomplished this by placing two small piezoelectric sensor pads under the screen and designing the screen so it could move slightly when pressed. Movement and sound are synchronized perfectly to simulate real button manipulation. Robotics The Shadow Hand uses the sense of touch, pressure, and position to reproduce the strength, delicacy, and complexity of the human grip. The Shadow Dexterous Robot Hand (SDRH) was developed by Richard Greenhill and his team of engineers in London as part of The Shadow Project, now known as the Shadow Robot Company, an ongoing research and development program whose goal is to complete the first convincing artificial humanoid. An early prototype can be seen in NASA's collection of humanoid robots, or robonauts. The Shadow Hand has haptic sensors embedded in every joint and finger pad, which relay information to a central computer for processing and analysis. Carnegie Mellon University in Pennsylvania and Bielefeld University in Germany found the Shadow Hand to be an invaluable tool in advancing the understanding of haptic awareness, and in 2006 they were involved in related research. The first PHANTOM, which allows one to interact with objects in virtual reality through touch, was developed by Thomas Massie while a student of Ken Salisbury at MIT. Future Applications: Future applications of haptic technology cover a wide spectrum of human interaction with technology. Current research focuses on the mastery of tactile interaction with holograms and distant objects, which if successful may result in applications and advancements in gaming, movies, manufacturing, medical, and other industries. The medical industry stands to gain from virtual and telepresence surgeries, which provide new options for medical care.
The clothing retail industry could gain from haptic technology by allowing users to "feel" the texture of clothes for sale on the internet. Future advancements in haptic technology may create new industries that were previously not feasible or realistic. Future medical applications One currently developing medical innovation is a central workstation used by surgeons to perform operations remotely. Local nursing staff set up the machine and prepare the patient, and rather than travel to an operating room, the surgeon becomes a telepresence. This allows expert surgeons to operate from across the country, increasing the availability of expert medical care. Haptic technology provides tactile and resistance feedback to surgeons as they operate the robotic device. As the surgeon makes an incision, they feel ligaments as if working directly on the patient. As of 2003, researchers at Stanford University were developing technology to simulate surgery for training purposes. Simulated operations allow surgeons and surgical students to practice and train more. Haptic technology aids in the simulation by creating a realistic environment of touch. Much like telepresence surgery, surgeons feel simulated ligaments, or the pressure of a virtual incision, as if they were real. The researchers, led by J. Kenneth Salisbury Jr., professor of computer science and surgery, hope to be able to create realistic internal organs for the simulated surgeries, but Salisbury stated that the task will be difficult. The idea behind the research is that "just as commercial pilots train in flight simulators before they're unleashed on real passengers, surgeons will be able to practice their first incisions without actually cutting anyone".
According to a Boston University paper published in The Lancet, "Noise-based devices, such as randomly vibrating insoles, could also ameliorate age-related impairments in balance control." If effective, affordable haptic insoles were available, perhaps many injuries from falls in old age, or due to illness-related balance impairment, could be avoided.
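The benefit of randomly vibrating insoles is usually explained by stochastic resonance: a weak pressure signal that sits below the sensory detection threshold can cross it once small random vibrations are added. A toy model with invented numbers illustrates the principle:

```python
import random

random.seed(0)
threshold = 1.0      # sensory detection threshold (arbitrary units)
signal = 0.8         # weak pressure signal, sub-threshold on its own
trials = 10_000

# Without added noise the signal is never detected; with small Gaussian
# vibration noise added, it crosses the threshold on a fraction of trials.
detections_plain = sum(signal > threshold for _ in range(trials))
detections_noisy = sum(signal + random.gauss(0.0, 0.3) > threshold
                       for _ in range(trials))

print(detections_plain, detections_noisy > 0)  # 0 True
```

All quantities here are illustrative; the sketch only shows why adding noise can make an otherwise imperceptible balance cue detectable.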

Wednesday, October 23, 2019

The Cause and Effect of Alcoholism

Alcoholism is a condition that has numerous effects on people in the United States today. It is defined as a condition that results in the continued consumption of alcoholic beverages despite health problems and negative social consequences. The symptoms of alcoholism vary from person to person, but the most common symptoms seen are changes in emotional state, behavior, or personality. Alcoholics may become angry and argumentative, or withdrawn and depressed. They may also feel more anxious, sad, tense, and confused. Alcoholism is a treatable disease, and many treatment programs and approaches are available to support alcoholics who have decided to get help, but no medical cure is available. Regardless of how someone is diagnosed as alcohol dependent or how they came to realize they have a serious drinking problem, the first step to treatment is a sincere desire to get help. Alcoholics who are pushed into treatment by social pressure or forced to quit by circumstances rarely succeed in the long run. Next, I would like to discuss the causes, effects and consequences of alcoholism. There are several possible causes of alcoholism and risk factors for the disease. Alcoholic liver disease usually occurs after years of excessive drinking. The longer you use alcohol and the more alcohol consumed, the greater the likelihood of developing liver disease. Acute alcoholic hepatitis can result from binge drinking; it may be life-threatening if severe. People who drink excessively can become malnourished because of the empty calories from alcohol, reduced appetite, and poor absorption of nutrients in the intestines. Malnutrition contributes to liver disease. These are among the many consequences that come from constant drinking of alcohol. The effects that alcohol has on the human body range from short-term to long-term symptoms. As a person consumes alcoholic drinks, the stomach immediately absorbs the alcohol and it enters the bloodstream.
Depending on features such as the age, weight, sex, and body size of an individual, alcohol will affect people in many different ways. Some of the lighter effects of the intake of alcohol include lightheadedness, while other effects, with an increased amount of alcohol consumed, include queasiness, vomiting, slurred speech and vision, and an increased amount of dizziness. There are many consequences of drinking that can lead to an addiction commonly known as alcoholism. Permanent long-term effects of consumption can include severe damage to essential organs such as the liver and brain. If a sustained period of no consumption occurs, many effects such as anxiety, delusion, and shuddering may appear. Drinking alcohol during pregnancy may lead to birth defects in infants, commonly known as fetal alcohol syndrome. Retardation and permanent physical deformities are common in many cases, and investigative studies have shown that offspring of alcoholic parents are at a much higher risk of becoming alcoholics themselves. In conclusion, there are several causes, effects, and consequences of alcoholism, as I have mentioned. Many people who use alcohol do not understand how harmful it is to their body. After reading my essay, I hope you have a better understanding of why drinking too much alcohol is bad for you.

Tuesday, October 22, 2019

Interpretation and judgement in news reporting. The WritePass Journal

Interpretation and judgement in news reporting. INTRODUCTION In this chapter I will undertake a review of theories relevant to the theme of this work. Various scholarly positions on the theory of media representation, media and social responsibility, and patterns of crisis reporting will be thoroughly examined. I will equally review scholarly works on the origin and nature of the Nigerian press. MEDIA REPRESENTATION The media in any society serve as the window through which the wider world is viewed. They give an account of reality, but not reality itself. Positions of various scholars in the field of media studies reveal that what we read, hear or watch in the media is a representation of reality and, as such, the media have the ability to, and actually do, construct reality through their coverage and reportage of events. The knowledge and perception of people about events, issues and objects within and beyond their geographical settings are usually formed and shaped by media representation of such events, issues and objects. The idea that the media utilize language, semiotics and visual images to construct realities has been extensively written about and researched among various scholars in the field of media and communication studies. While some scholars have espoused cultural views of media representation (Hall, 1997), others have adopted the notions of race (O'Shaughnessy 1997, Ferguson 2002, Acosta-Alzuru 2003), language, and identity (Rayner 2001). To Hall (1997, p.
17) "Representation is the production of the meaning of the concepts in our mind through language and it is the link between concepts that enables us to refer to either the real world of objects, people or events…". The concept of representation, according to Hall (ibid), entails "using language to say something meaningful about, or to represent, the world meaningfully to other people…it is an essential part of the process by which meaning is produced and exchanged between members of a culture". Hall describes representation as a phenomenon that involves the use of language, signs and images to symbolise and represent objects. The use of language in cultural studies can be reflective when it reflects the existing meaning of an object, intentional when it reflects the personally intended meaning, and constructionist when meaning is constructed through the use of language (Hall, 1997). Hall (1997, p.15) examines the concept of representation in terms of the "circuit of culture", which implies that representation, as a concept in cultural studies, "connects meaning and language to culture". The media utilize a great deal of images, signs and language to describe and report events or objects to their audiences, and their use of such elements serves as the basis upon which the knowledge and perception of audiences about the objects and events being reported rest. Representation therefore dwells on how the media create meaning and form knowledge through the use of language and visual images. Acosta-Alzuru and Roushanzamir (2003, p.47) assert that "Representation constructs meaning by connecting the world, language and lived experiences. By performing these connections representation does not reflect the frame of the world, but constitutes the world". Rayner et al (2001, p.63) describe representation as "the process by which the media present to us the real world".
They further assert that "there is a wide philosophical debate about what constitutes 'reality' and whether, in fact, reality ultimately exists. If, however, we assume, for the convenience of looking at representation, that there is an external reality, then one key function of the media is to represent that reality to us, the audience". One issue central to the various postulations of scholars on media representation is the inability of the media to reproduce the exact real world. News generally is an account of reality, not reality itself; thus most media organizations and journalists often fall prey to adding their interpretations and judgment to certain news stories with a view to creating meaning.

INTERPRETATION AND JUDGEMENT IN NEWS REPORTING

In reporting and presenting issues, the media often add their own judgment and interpretations, thereby defining public knowledge of certain events. On the other hand, audiences also subject media messages to some interpretation, which explains why they are of the view that media bias is possible in the reporting of events. According to Hawk (1992, p.1), "there are no such things as facts without interpretation". This assertion is supported by Said (1981, p.154) as he succinctly observes that: "All knowledge that is about human society, and not about the natural world, is historical knowledge and therefore rests upon judgment and interpretation. This is not to say that facts and data are non-existent, but that facts get their importance from what is made of them in interpretation". In their coverage and reportage of events the media therefore give their meaning and identify for readers those events that are considered important. Relating these assertions to the Nigerian press representation of the Niger-Delta crisis, it is evident that the media tend to give meaning and interpretation to the activities of the Niger-Delta militants vis-à-vis government reactions and the perception of the general public.
Based on the argument and counter-argument between African and non-African analysts on Western media coverage of Africa, especially in the area of the media subjecting their reports to judgement and interpretation, scholars have emphasized the need for news analysis. In his work "Islam and the West in the Mass Media", Hafez (2000) points out that international news coverage can be analysed by focusing on the textual patterns, linguistic features, and the arrangement of facts, arguments and frames in foreign reporting, to understand whether or not such reporting is based on objectivity or sensationalism (p.27). Empirical evidence based on the existing views of various scholars reveals that, in understanding the causes and effects of media coverage, it is important to examine the individual perception of journalists and the orientation of the mass media in relation to the object being reported. As argued by Falola (2000, p.30), "most foreign media use certain stereotypes and images to represent African states as epitomes of vampirical authoritarian governance, parasitical political elites, fierce religious and tribal animosities and endemic sickness and misery". Having examined the theory of media representation vis-à-vis the discourse of media interpretation and judgement in news reporting, I proceed to discuss the media representation of Africa within the context of the theory of media representation.

THE MEDIA AND SOCIAL RESPONSIBILITY

The social responsibility theory is based on the notion that the media must perform their role bearing in mind the "public interest". McQuail (2005:164) rightly observes that the concept of public interest is simple yet fraught with many disconnected views about what it entails or should entail. In Nigeria, for instance, the issue of resource control has been a subject of many debates and the cause of protracted conflict.
What would constitute the "public interest"? Should the press promote the position of the proponents of resource control, or should it support those who say every state should share in equal measure in the nation's oil wealth? McQuail, however, quickly clears the fog by stating that the mass media must operate by the same principles that govern other units of society: justice, fairness, democracy, and prevailing notions of desirable social and cultural values. Any practice in society that undermines these principles, singly or collectively, constitutes sabotage of the "public interest" and may rightly attract censure from the media. Further, McQuail identifies the factors that may affect the promotion of the public interest, which he defines in terms of cultural, political, professional and commercial interests. On culture-induced effects, there is the institutional entrenchment of a culture of apathy and distrust towards the people of other tribes or ethnic groups. Nigerian society's penchant for religious and ethnic conflict is an unfortunate testimony to this fact. And since the news must carry the stories, including those of casualties, there is the tendency for reporting to cause an escalation of a crisis. Liebes and Kampf (2004:79) captured it this way: "…whereas politicians and representatives of the elite are free to address the media at any time (crossing the threshold through the 'front door'), the only chance of radical groups to invade the screen is via the 'back door', that is, by the use of violence…the more violence they created, the greater the chance of crossing into the screen and being viewed by the public. The chance, however, is also greater for the coverage to be more negative, and therefore acts as a boomerang". The political inhibition to "public interest" reporting may play out in the bias of the practicing journalist, who might have a stake in the issues for which a group is agitating.
How does a journalist from the Niger Delta maintain neutrality on the issue of resource distribution and control when it has such profound effects on his life and that of his family? Or how does a journalist from Katsina State maintain neutrality when ceding resource control to the generating states means that his state's allocation may be greatly reduced? Beard (2000, p.18) holds the position that "to expect that a political journalist or politician can tell the truth is problematic, because such an expectation fails to take account of the fact that both the creator and the receiver of the text bring ideological values to it". He explains further that reporting capitalizes on certain language forms, such as metaphor, metonymy, analogy and transitivity, to show subtle or blatant sympathy for, or apathy towards, various ideological positions extant in society (Beard, 2000:25). However, Keeble (2005:269) advocates a journalism practice that is founded on the universal principles of honesty, fairness, respect for privacy, and the avoidance of discrimination and conflicts of interest. But he also correctly observes that "cultures and political systems around the globe throw up very different ethical challenges for journalists." It is difficult to maintain neutrality in the face of threats, especially when such threats reach the point of fatality (Hartley 1982, p.84; Tumber, 2004, p.199), but the universal ideals require a reach toward neutrality and objectivity. Another factor that affects the responsibility of the media to society is a low level of professionalism. Professionalism may be seen as a commitment to the highest standard of excellence in the practice of journalism. It is a combination of the finest skills with the highest ethical conduct. This ideal contrasts sharply with the prevailing shallow approach to the coverage and analysis of issues of public interest seen in sections of the Nigerian media.
The rate of unemployment and the abysmal state of corruption and nepotism have created an opportunity for unqualified individuals to practice journalism. The result, as Gujbawu (2002, p.71) rightly observes, is the press's increasing penchant for being a mouthpiece for the ruling elite at the expense of society: a tendency to produce media content that misinforms, misleads, confuses and destroys society. In view of this, a classic work on theories of mass media has shown that many media problems are attributable to the education of reporters and editors and to poor preparation before undertaking assignments. Observable errors of fact may lead to questioning of the authenticity of an entire report, which further brings into question the credibility of the media as dependable custodians of the public conscience (Severin and Tankard, 2001, p.314-5). Another factor identified by McQuail (2005, p.164) as the bane of "public interest" journalism is commercialism. Scholars agree that there is an increasing tendency toward monopolization of the media in the hands of a few rich business and media moguls (Dominick 1994, p.109; Aufderheide, 2004, p.333; Stevenson, 2005, p.40; Harrison, 2006, p.164). These investors are engaged in stiff competition for market share, with attendant repercussions. As noted by Folarin (1999, p.27), the commercialist press "worships at the altar of profit and consumerism which often vitiate the ideals of social responsibility." The profit motive makes the media vulnerable to the ideologies of big advertisers, while consumerism lowers values since the media must give the public what it wants. Under these circumstances, commercial interests take precedence over the public good. Nevertheless, the social responsibility theory holds that while the press must be free, it must also be responsible.
The basic tenets of a socially responsible press, following the recommendations of the Robert Hutchins Commission of 1947, are outlined in Severin and Tankard (2001, p.314) and McQuail (2005:171): A socially responsible press should provide a full, truthful, comprehensive and intelligent account of the day's events in a context which gives them meaning. It should serve as a forum for the exchange of comment and criticism and as a common carrier of public expression, raising conflict to the plane of public discourse. A socially responsible press should give a representative picture of the constituent groups in society while presenting the goals and values of society and the issues that have relevance to the well-being of the local community. A press with this kind of orientation is what is needed in a crisis-prone, or crisis-ridden, society. Coverage of crises in Nigeria requires that the media be truthful, comprehensive and balanced, representing the views and interests of the constituent groups in the federal state that it is.

PATTERNS OF CRISIS REPORTING

A pattern of reporting is a description of the differences in the reportage of news stories resulting from the different perspectives from which people view events. The patterns may be intrinsic or extrinsic; rather than being opposites, they are simply two sides of the same coin. Intrinsic patterns are the latent patterns that reflect the peculiarity of a paper, those features that differentiate one paper from others. These features are manifested in the language and the point of view that a paper expresses. They are seen in the way a paper challenges or reinforces certain stereotypes, and in the overt political positions a paper adopts or discards (McNair, 2005:35). As Curran (2002:34) suggests, the location of a news story within the frame of reference of a political position, by attribution, is a subtle way by which journalism advances one political opinion against another.
On the other hand is the extrinsic pattern, which comprises the obvious physical characteristics of a news report as it appears in the paper. This is marked by such features as the choice of a front-page story, which reveals the level of importance a newspaper ascribes to a story as against other stories. It is also manifest in the amount of space given to a story: a story that is considered important will have depth of discussion, attributions, background information, and a detailed description of the events and persons in the story. Also, an important story in the news is marked by extensive non-news editorial commentary in the form of features, letters to the editor, opinion articles, and brazen editorials by the paper. This is where societal views are extracted and harnessed to set further agenda for public discourse and to provide ideas for policy makers. Meanwhile, there are certain features that characterize crisis stories. One is that a crisis naturally commands prominence: in any crisis the suffering of the victims usually engages sympathy, and this human interest factor makes the story popular, thus giving it prominence. The other factor is drama. Simply put, drama is action, a deed or performance that interests people, presented on a stage or in a theater. In this case, the stage for the drama in a social crisis is the public sphere (Abcaran and Klotz, 2002:19). Drama in the news describes the day-to-day actions that occur in human societies, actions that are considered worthy of mediation. The crisis story is typically drama-laden: crisis reporting captures the intrigues, blackmail, betrayals, protests, and so on, that occur in human experience. Furthermore, the crisis story has conflict: the inability of players in the social sphere to reach consensus on issues of ideology, personal or group interest, and opinion. This may degenerate into violence, often of a fatal dimension (Veer, 2004:9).
The interest is heightened by the impact of the conflict on human life and property.

CRISIS COVERAGE AS CRISIS MANAGEMENT

So far I have used the terms 'crisis' and 'conflict' interchangeably. The Chambers English Dictionary defines crisis as "a crucial or decisive moment…a time of difficulty and distress", while conflict is described as "an unfortunate coincidence or opposition; violent collision"; some synonyms provided are "to fight; to contend; to be in opposition". Conflict may be an overflow of crisis. As it occurs in the Niger Delta, we may see the crisis from ethnic, political or economic dimensions, occurring hardly mutually exclusively, and manifesting in the form of protests, walkouts, strikes and often such violent expressions as killing, maiming, shooting, and kidnapping, on which this study is focused. Simply put, conflict, as manifested at the community level in the Niger Delta, is the expression of disaffection and the outburst of tension built up over time due to denied or subverted expectations. Conflicts may be violent or non-violent. Reporting crisis takes different forms depending on the nature of the society in terms of its social structures and ethnic composition, i.e. homogeneous, plural, or multi-cultural societies. Owens-Ibie (2002, p.33), citing Corbett (1992), shows that "media in homogeneous societies, characterized by an inclination toward consensus, tend to air conflict less than those in plural societies". Owens-Ibie goes on to state that Nigeria, as a heterogeneous society, tends to play out this trend: "The media in the country is a terrain for airing conflict, and such coverage is a reflection of the socio-cultural and other diversities that the country typifies". This statement cannot be untrue if weighed against the historical background of the Nigerian state, which comprises different ethnic nationalities fused against their wishes by the colonial explorers, a contrivance in mischief (Isoumonach and Gaskia, 2001, p.55).
This history has therefore been characterized by constant striving for relevance and self-determination by each component of the amalgamation, especially the so-called minority groups. Expectedly, the media assume center stage in these agitations, and a hegemonic stance at that. Hartley (2002:99) explains that: "The crucial aspect of the notion of hegemony is not that it operates by forcing people against their will or better judgment to concede power to the already powerful, but that it works by winning consent to ways of making sense of the world that do in fact make sense…the concept is used to show how everyday meanings, representations and activities are organized and made sense of in such a way as to render the interests of a dominant 'bloc' into an apparently natural and unarguable general interest, with a claim on everyone". Two basic approaches to assuming hegemonic control quickly come to the fore: one is the media approach; the other is the people approach. With particular reference to the Niger Delta, what Curran (2002:150) refers to as 'dominant discourse' finds a fitting application in the agitations of the Niger Delta people. There has been a determined resolve to keep the media (and every occasion that promises media attention) awash with messages on resource control, fiscal federalism and equal rights to national political leadership. The expected outcome is to draw national and global attention to the plight of the Niger Delta people in the Nigerian state. The people approach is exploited when non-elite groups constitute themselves into "organizations" which are used as sources of news and comment by the media. While non-elite groups have, in general, restricted access to the media, this can be modified through improvements in organization (Curran, 2002, p.152-153).
Although this modification has sometimes come in a negative sense, the organization of various pressure groups and even militia forces has brought media attention to the cause of the Niger Delta on an unprecedented scale. It is true that media coverage tends to favor the elite, official position. As this work shows, the news is most times written from the official standpoint. By the very nature of his office, the official is furnished with the paraphernalia that guarantee he can make a statement on a particular issue, either in person or by proxy. The Nigerian President, for instance, has a Special Assistant for Media and Publicity, a Special Adviser for Media and Publicity and a host of other officials, not counting that the services of the entire Ministry of Information and National Orientation and its quasi-organizations, which include the radio and TV networks, are at his disposal. It is therefore an onerous task for the other parties in the Niger Delta to beat this communicative advantage. Should the media then give a voice only to the elite party, to the exclusion of the other? Crisis management may be seen in three phases. The first, or pre-crisis, phase is the time when a crisis is anticipated. Having established that a plural, multi-cultural state like Nigeria is conflict-prone, the press should always anticipate crisis by observing the signals that portend disturbance in the social equation. The media must then provide such coverage as will help to nip the crisis in the bud: they should identify, expose, educate and enlighten citizens on those things, persons, or policies that constitute a threat to national security (Odunlanmi, 1999, p.132; Galadima 2002, p.62). The next phase is the in-crisis stage, when a nation is facing a condition of distress. Galadima (2002, p.60-62) presents the atmosphere that may characterize conflict reporting. First, reporting advertently or inadvertently gives publicity to the crisis.
Second, reporting tends to win appreciation or engender resentment from the different parties involved, because certain interests are either being protected or subverted; if reporting is seen as biased, it could precipitate very unwelcome reactions. The Nigerian experience shows that parties not favored by a report may descend into unleashing terror on the reporter or the organization he or she represents, and even on uninvolved members of society. Third, reported violence in a conflict, especially casualty figures, could lead to more violence. Nigeria is a typical illustration of this: whenever a killing is reported, it usually precipitates reprisal attacks elsewhere. Fourth, it should be noted that each party in the dispute wants to have a voice through the media from which it can air its subjective opinions on the issue. The media must not become, or be seen as, a loudspeaker for either of the parties, as that would not be without grave consequences. Then we have the post-crisis stage. The media must determine, suggest and promote, through editorials and commentaries, what "strategies and policies can be developed [and deployed] to prevent similar or related crisis" (Ajala 2001:180). There should be a continual emphasis on those issues that guarantee peace, justice, equity and mutual coexistence, while denouncing those that cause disaffection, frustration and distress in the system. If these steps are observed, the media would be a veritable tool not just for crisis reporting, but for crisis management through reporting.

The Origin and Nature of the Nigerian Press

Nigerian media historians generally agree that the Nigerian press has a Christian missionary origin. Goaded by the motive "to excite the intelligence of the people…and get them to read", Henry Townsend established the Iwe Irohin in 1859 (Duyile 1987, cited by Mohammed 2003:19). Shortly after the establishment of this mission-oriented press, the nationalist press came on stream.
The primary objective of the press in this era was to attack, decimate and summarily expel the British imperialists. It was hostile to the British colonial administration. The press in this era championed the liberation struggle, agitating for sovereignty and self-governance. It had a nationalist (not a nationality) focus. This era technically ended on September 30, 1960 (Ajuluchukwu 2000:14). Subsequently, the press had the task of engineering a new state and guiding its evolution into a viable venture. Ajuluchukwu (2000:42) speaks of the journalism of this post-independence era in this wise: "For our professional journalists, the transition experience (from colonial to civil rule) proved sickeningly tortuous, mainly because they apparently failed to be reconciled with the fact that the emergent democratic government of independent Nigeria was not an extension of the preceding imperialist despotism. In that lingering frame of mind, the press remained as hostile to the government of indigenous Nigerians as it had been to the expelled British regime. It was as though the media in the First Republic regarded our independent federal administration as a government neither of the people nor by the people and not for the people. The independent print media of the period demonstrated a clear unwillingness to give blanket support to the government". It is important to note the emphasis on independent media. Contrary to the independent editorial stance of the privately owned media, the earlier established organisations of the leading politicians of the three major regions (Eastern, Western, and Northern) were heavily partisan, promoting the interests of the regions that had founded them.
Mohammed (2003, p.33-34) provides insight into the implications of this for the place and role of the press in this era: "In the Northern Region, such media establishments as the Hausa-language publication Gaskiya Ta Fi Kwabo, established in 1948 and renamed New Nigerian in 1966, and Radio Television Kaduna, established in 1962…the Western Nigerian Television, founded in 1959; the Tribune group of newspapers, founded in 1951 by Chief Obafemi Awolowo; Sketch Newspapers, established in 1964; Dr Nnamdi Azikiwe's West African Pilot, founded in 1937, and its chain of publications, in addition to the Eastern Nigerian Television, established in 1960…The attainment of independence in 1960 and the devolution of power to the petty-bourgeois politicians through the three major political parties (the Northern People's Congress, based in and serving the North; the National Council for Nigeria and the Cameroons in the East; and the Action Group in the West)…were to impact on the role of the mass media in post-colonial Nigeria. Although they were once united in 'fighting' the colonial impostors, they became divided, serving partisan, ethnic and sectional interests". This may be regarded as the beginning of the nationality press in Nigeria. Currently, there exist in the Niger Delta streams of community-based newspapers that seek to foster the Niger Delta agenda. Most of them are based in Port Harcourt, a city which, for some strategic political and socio-economic reasons, may be regarded as the de facto headquarters of the Niger Delta. Some of these papers include Argus, Hard Truth, and The Beacon, among others. Appearing in tabloid form, most of them circulate on a weekly basis. Most also have their circulation limited to Port Harcourt, but they are no less effective in shaping the opinion of the people and presenting their position on issues plaguing the oil-rich area. It is important to state that the press in the Niger Delta would make an elaborate subject for another research project.
THE NIGERIAN MEDIA AND NATIONAL SECURITY

There are two positions on what constitutes national security: the militarist perspective and the developmental perspective. The militarist perspective locates national security in the ability of a nation to deter an attack or to defeat it (Lippmann, cited in Odunlanmi, 1999, p.128). Here national security is seen as the protection of the territorial integrity of a nation by military might. Therefore, a nation should develop the necessary weaponry to curtail and prevent the invasion of her territory by enemy forces, and ensure that her citizens enjoy physical freedom and political independence and that their minimum core values are protected (Odunlanmi, 1999:128). On the other hand, the developmental perspective sees national security as going beyond the territorial security of a nation or the physical safety of her citizens. As observed by Nweke (1988), cited in Odunlanmi (1999, p.129): "There is no doubt that national security embodies the sovereignty of the state, the inviolability of its national boundaries, and the right to individual and collective self-defense against internal threat. But the state is secure only when the aggregate of people organized under it has the consciousness of belonging to a common sovereign political community; enjoys equal political freedom, human rights, and economic opportunities; and when the state itself is able to ensure independence in its development and foreign policy". Alli (2001, p.201) agrees with this thought, advancing that security should be all-embracing and may include 'personal security and freedom from danger and crime', 'freedom from fear and anxiety', 'freedom from disease' and 'a general feeling of well-being'. Thus the people in a state must not just be said to have access to the means of economic self-reliance, political participation, and respect for basic human rights and dignity; they must be seen to enjoy these benefits.
They must be seen to be sufficiently empowered to access and enjoy good food, good shelter, equal rights to political participation, the right to freedom of expression and civil dissent, and other basic rights.

Conclusion

One of the basic causes of conflict in any society is the lack of a free flow of communication. Each segment of society needs an outlet to vent its feelings and opinions on the issues of the day. Sewant (2000, p.20) speaks of civil institutions in society which are "uncommitted to any political party or ideology". These institutions may be educational, religious, literary and cultural, sporting, financial and economic, or concerned with social welfare. "These institutions", he says, "occupy spaces in the social life not covered by the political institutions. There is a competition and even rivalry between the political and the civil institutions"; both need a voice through the media. Clearly, the media must provide a platform for civil discourse and dialogue in which people can air their views on matters that concern them. When opinions are suppressed, emotions repressed, and views ignored, the result may be a state of anarchy, whose perpetrators may seek to excuse themselves on the grounds that there was no option other than violence by which to voice their own "idealistic, even altruistic, goals" (Whittaker, 2004:3). Alli (2001:201) explains that "in a heterogeneous society like Nigeria, suppressed opinion is unhealthy to the foundation of the state; it [breeds] discontent and violent expression". In his work on the capacity of the media for social mobilization, Folarin (2000, p.104) observes the "media's potential to counter threats to stability, minimize panic and anxiety and maintain cultural and political consensus". By simply giving people the opportunity to talk, a lot of problems may be avoided, curtailed or solved. The media must provide this opportunity.
â€Å"When the media represents and speaks on behalf of all sections of the society, particularly the voiceless, it gives meaning to democracy as a truly representative regime† (Sewant, 2000:25). Secondly, the media have capacity to champion polices that encourage better living condition by promoting accountability, responsible leadership and good governance on the part of leaders. At the same time, should be on the vanguard of campaigns against any policies or actions that undermine national security. The media provides a platform for debates on public policies, so that both the rulers and the ruled have the opportunity to make inputs, the effect of which are far-reaching in strengthening democratic structures and guaranteeing national security. This is the correction role of the media. Further, programming in the media should also address the need for citizenship and cultural education, so that in a plural society, like Nigeria, one segment of the polity is able to understand, appreciate and respect the other cultures extant in the society. This will cause less tension. For this to happen, it is crucial to have a media that is plural, to the extent of being representative of the different interest in the state. Oyovbaire (2000, p103) advocates for pluralism of the press in terms of an operational base that is diffused and a programming philosophy that is liberal and accommodating of interest other than that of the proprietors. Unfortunately, as Oyovbaire argues, the media has not only been concentrated in the south-west of Nigeria, particularly Lagos State, it is often seen to hold and highlight sectional opinions. In promoting national security, the media must educate and enlighten the citizens on the factors that unite them, while avoiding and dislodging divisive tendencies and sentiments (Odunlanmi, 1999:132).

Monday, October 21, 2019

Indus Civilization Timeline and Description

The Indus civilization (also known as the Harappan Civilization, the Indus-Sarasvati or Hakra Civilization, and sometimes the Indus Valley Civilization) is one of the oldest societies we know of, comprising over 2,600 known archaeological sites located along the Indus and Sarasvati rivers in Pakistan and India, an area of some 1.6 million square kilometers. The largest known Harappan site is Ganweriwala, located on the bank of the Sarasvati river.

Timeline of the Indus Civilization

Important sites are listed after each phase.

Chalcolithic cultures 4300-3200 BC
Early Harappan 3500-2700 BC (Mohenjo-Daro, Mehrgarh, Jodhpura, Padri)
Early Harappan/Mature Harappan Transition 2800-2700 BC (Kumal, Nausharo, Kot Diji, Nari)
Mature Harappan 2700-1900 BC (Harappa, Mohenjo-Daro, Shortugai, Lothal, Nari)
Late Harappan 1900-1500 BC (Lothal, Bet Dwarka)

The earliest settlements of the Harappans were in Baluchistan, Pakistan, beginning about 3500 BC. These sites are an independent outgrowth of Chalcolithic cultures in place in south Asia between 3800-3500 BC. Early Harappan sites built mud brick houses and carried on long-distance trade.

The Mature Harappan sites are located along the Indus and Sarasvati rivers and their tributaries. The people lived in planned communities of houses built of mud brick, burnt brick, and chiseled stone. Citadels were built at sites such as Harappa, Mohenjo-Daro, Dholavira and Ropar, with carved stone gateways and fortification walls. Around the citadels was an extensive range of water reservoirs. Trade with Mesopotamia, Egypt and the Persian Gulf is in evidence between 2700-1900 BC.

Indus Lifestyles

Mature Harappan society had three classes: a religious elite, a trading class, and the poor workers. Harappan art includes bronze figures of men, women, animals, birds and toys cast with the lost wax method.
Terracotta figurines are rarer, but are known from some sites, as are shell, bone, semiprecious stone and clay jewelry. Seals carved from steatite squares contain the earliest forms of writing. Almost 6,000 inscriptions have been found to date, although they have yet to be deciphered. Scholars are divided about whether the language is likely a form of Proto-Dravidian, Proto-Brahmi or Sanskrit. Early burials were primarily extended, with grave goods; later burials were varied.

Subsistence and Industry

The earliest pottery made in the Harappan region dates from about 6000 BC, and included storage jars, perforated cylindrical towers and footed dishes. The copper/bronze industry flourished at sites such as Harappa and Lothal, and copper casting and hammering were used. The shell- and bead-making industry was very important, particularly at sites such as Chanhu-daro, where mass production of beads and seals is in evidence. The Harappan people grew wheat, barley, rice, ragi, jowar, and cotton, and raised cattle, buffalo, sheep, goats and chickens. Camels, elephants, horses, and asses were used as transport.

Late Harappan

The Harappan civilization ended between about 2000 and 1900 BC, resulting from a combination of environmental factors such as flooding and climatic changes, tectonic activity, and the decline of trade with western societies.

Indus Civilization Research

Archaeologists associated with the Indus Valley Civilization include R.D. Banerji, John Marshall, N. Dikshit, Daya Ram Sahni, Madho Sarup Vats, and Mortimer Wheeler. More recent work has been conducted by B.B. Lal, S.R. Rao, M.K. Dhavalikar, G.L. Possehl, J.F. Jarrige, Jonathan Mark Kenoyer, and Deo Prakash Sharma, among many others at the National Museum in New Delhi.

Important Harappan Sites

Ganweriwala, Rakhigarhi, Dhalewan, Mohenjo-Daro, Dholavira, Harappa, Nausharo, Kot Diji, Mehrgarh, and Padri.
Sources

An excellent source for detailed information on the Indus civilization, with lots of photographs, is Harappa.com. For information on the Indus Script and Sanskrit, see Ancient Writing of India and Asia. Archaeological sites (both on About.com and elsewhere) are compiled in Archaeological Sites of the Indus Civilization. A brief Bibliography of the Indus Civilization has also been compiled.