Monday, August 24, 2020

Work-Based Learning Assignment Essay Example

Work-Based Learning Assignment. 1. Introduction. This assignment evaluates my role in the job I am currently doing and the extent to which it has contributed to my personal development. As I am currently working in the freight forwarding (shipping) industry, I have chosen this job as the subject of my work-based learning assignment. It is easier for me to base the assignment on my current job than to take up volunteer work elsewhere, which would also affect my attendance at my current workplace. 1.1 Company Profile: Reisa Freight Ltd. is a UK-based company engaged in import and export activities. As a company we supply our services to buyers, exporters and importers for their international shipping needs. Reisa Freight Ltd. acts as a middleman/agent, operating worldwide with agents in several countries. We handle exports from the shipper's or manufacturer's warehouse to the buyer's warehouse, not to end users. 1.2 Job Profile: The main purpose of my role is to coordinate with customers, prepare the relevant documents and liaise with the back office or operations team to keep activity running smoothly. The role demands efficiency, accuracy and completeness within a given time frame. My job also involves coordinating with carriers to secure pre-booked space for the coming week's cargo, which avoids last-minute problems. In short, this work requires solid planning and on-time execution. It also requires understanding people at work, including understanding others' interests, motivations and skills.
To put it plainly, the role involves developing and reviewing relationships with others (manager, colleagues, team members, customers, suppliers and so on), including agreeing respective roles, responsibilities, rights and expectations; booking freight space on ship, plane, train or any other form of goods/cargo transport; route planning; various documentation; export packing; insurance; warehousing; and collection and delivery of consignments. 2. Main Body. During my seven months of tenure I have managed to gain valuable skills, including those I needed most. There are certain skills I still need to improve, and others I have achieved while working with Reisa Freight Ltd. I discuss all of these in the following sections. 2.1 Skills that need some development. • Communication: My biggest obstacle here is language. With English as my second language, I find it the greatest hurdle in improving my communication skills. There has been considerable progress since I started, but there is still a lot more to do to bring it up to a level that meets a high standard. • Decision-making: I feel uncomfortable making significant decisions that require my independence. I have spent only seven months in total as a working person, and I would need more work experience to gain confidence in decision-making. Some training in this skill would help me, which I intend to pursue after finishing my graduation. • Leadership: Being previously inexperienced, with a total of seven months of work experience, I see much more to do on leadership. Leadership skills require work experience and a standard of education, which I will gain after my studies. • Analysis: As a newcomer to the field, I lack the analytical skill to assess a situation and plan accordingly.
This makes me dependent on my seniors and longer-serving colleagues. I personally believe this skill will develop with time spent at work and with my own efforts to plan things from the very beginning and execute them through to the end. In my job I have been given the opportunity to analyse every shipment from the start and act accordingly. • Problem solving: My lack of decision-making directly affects my problem-solving skills. As a new employee I have not been given the chance to make decisions of my own; that will come after a specific period of time with the company. Because I cannot yet make decisions, I must rely on my seniors for instructions in these situations, which means problems are solved by decision-makers above me rather than by me. 2.2 Skills I scored highest on. • Planning: My job requires pre-planned activity, which is the basis of our service commitment to customers. The first thing we learn in this business is to plan and then execute. Planning does not guarantee the desired results, but it leads to doing the right thing at the right time. I personally feel my job responsibilities have made me comfortable with planning; it regularly adds something extra to my skills. • Monitoring: Once a plan has been made and put into action, the second step is to monitor it at every stage. A break anywhere in the planning chain can derail the whole task. The purpose of my monitoring is to manage the task and correct problems whenever and wherever they arise. • Reviewing: Reviewing my work every day improves my efficiency and proficiency. Skilled reviewing gives an idea of what needs to be done.
During the work I have learned that reviewing all our daily activities gives us experience and a sense of the probable outcomes of the next day and beyond. It also shows performance improvement. • Prioritising: As a professional I have learned to prioritise my work, putting in order what needs doing and when. This is achieved by setting objectives and goals, and it is an important part of decision-making. In my work, prioritising matters because we must make decisions depending on the situation. For example, where a buyer wants his entire order from several suppliers shipped at once, problems with space allocation may force us to make our own decision to prioritise particular orders or shipments. • Reporting: My role is that of an executive. I have responsibilities and report directly to my seniors. I need to report all my daily activities so that they are understandable and, most importantly, acceptable. An acceptable standard of work has been gained through the work experience. • Motivating: Motivation is needed for every success; demotivation leads to failure in the job and the work task. I have learned how to challenge my negative thoughts, which helps me recognise the possibilities of my future. 3. Checking the level I have achieved. Presentation skills: Competent speaker, able to talk to small groups of my peers, though a little nervously. Written skills: Good; creative, able to use comparison, example, similes, analogies, vocabulary and other tools. Organisational/planning skills: Limited; I can plan and organise my own time to achieve targets. Team-working skills: Good; able to work well in a team and to perform different team roles. 4. Conclusion. There is overall satisfaction with my performance within the organisation, as recognised at senior level.
My seven months of tenure with the firm were spent much like a student's, and this helped me gain a great deal. Yet there is still a long way to go, and certainly more to achieve than I thought at first. Much more confidence is required when taking important decisions; an uncomfortable situation always leads to a loss, whether large or small. At present I work with the help of other experienced staff, which also discourages me from taking my own initiative. In the near future, however, I hope for responsibilities with an independent role, which will surely help me gain improved skills and reach my goals. For some time I have had a loose idea of the goals I would like to achieve in the short to medium term. Now that I have set myself a deadline, I am confident and assured of achieving them. However, I would like to improve my self-confidence and increase my motivation to get the most out of my work. I would like to eliminate the attitude that holds me back, causes me trouble and ultimately unhappiness. I would like to increase the pride and satisfaction I take in my achievements, which are among the advantages of goal-setting, and to raise my self-confidence from its present level so that I perform better in every area of my work.

Saturday, August 22, 2020

The Chinese Revolution Essay

Like many other countries around the globe, China has a long history of struggle for equality and prosperity against despots and tyrannies. The founding of the People's Republic of China in 1949 seemed to have ended that struggle for a better life. "The Chinese people have stood up!" declared Mao Tse-tung, the chairman of China's Communist Party (CCP), the leading political force in the country at the time. The people were defined as a coalition of four social classes: the workers, the peasants, the petty bourgeoisie and the national capitalists. The four classes were to be led by the CCP as the vanguard of the working class. For the first time in decades, a new Chinese government was met with peace and hope, rather than massive violent opposition, within its territory. The government and its political force, the CCP, were expected to fulfil the century-long dream of the Chinese people for "reason, freedom, progress and democracy." The government vowed to bring about a string of rapid political and economic changes that would dramatically improve the life of every Chinese citizen within the span of one generation. A promise of a massive land reform that would give long-awaited land to millions of peasant families won their support for the new government. At that time, the party's members of peasant origin accounted for about 90 percent. The Chinese intellectuals supported the communists for their promise to establish a variety of democratic institutions that ...

Saturday, July 25, 2020

anatomy of a hell week

let me take a second to talk about the week ive had: last thursday, i went to not one but two P.E. classes because it was the last day of the quarter, and i also read three stories for CMS.307 workshop. the thursday evening section of CMS.307 coincided with a 6.867 exam, so i had to make that up the next morning at 8 a.m. i hadnt studied nearly enough, so i stayed up most of the night cramming material i had barely looked at before. afterwards, i went home and slept through a 14.121 review session, woke up in the evening, had a nice dinner with my friend, and then realized i had a bunch of unfinished work for my UROP. hanging over my head for the week was half of a 14.121 pset that id gotten stuck on. if mit is hell, i have descended to the 9th circle and am hangin out in the ice lake w/ judas and satan :~) i spent this weekend working on a behemoth of a homework assignment for 6.867 and trying to catch up on my late 14.121 homework and studying for a 14.121 final exam that is worth literally my entire grade in the class. melodrama filled the gaps. more 6.867 (behemoth!!) on monday and tuesday and then more 14.121 on tuesday night. im writing the first draft of this blog post on the tail end of an all-nighter spent cramming for 14.121. i had an 18.112 pset due at 10 am today that i will need to turn in late because ive used up my one and only extension in the course. i am supposed to have read the hunger games for CMS.307 but there was no time to read it (my bad); at least ive read it before. basically: sometimes you become extremely hosed and things are super rough and you sit in your room and eat an ungodly amount of junk food and cram for tests and sleep through lunch plans and do everything else you always advise other people not to do :( and its painful and not-fun, and you experience a lot of stress and snap at people you care about and wonder what in the world is the point of all of this.
anyway i am so looking forward to the weekend and to completing a first draft of a short story for CMS.307, to halloween and next haunt and having a little time to see my friends/the light of day again! a photo from better days (boston harbor, 10/15)

Friday, May 22, 2020

The Killer of Hope: Euthanasia

When asked, "Why is it important to accept euthanasia?" the answer is always about releasing the patient from pain; but why take a naïve solution when there is hope? Take a second and think about how one would say goodbye to the ones they love. The answer is obvious: it is impossible to let go of those we love. Hence, one should keep an open mind to the following lines, whether you are against or for euthanasia. Euthanasia, or so-called physician-assisted death, stands for the intended cessation of a person's life in a situation of terminal illness. This is done either by administering a fatal drug or by withdrawing life-supporting therapy in order to end the patient's life. Euthanasia is one of the most debatable issues nowadays, as more and more people are questioning whether it is mercy killing or hope killing. It is worth stating at this point that euthanasia must be banned universally on account of ethical, medical and legal reasons. One of the most striking issues of euthanasia is the ethical consideration. Supporters of euthanasia usually suggest that we should respect patients' autonomy and allow them to value the quality of their lives, in addition to reducing the risk of premature suicide. It is true that every person should have control over his own life. However, one should take into consideration that legalisation of euthanasia could lead to coercion of patients' autonomy, especially since a person's desire to die may be influenced by depression or even by pain that is curable.

I had not thought about euthanasia in such depth until this assignment. It isn't something completely new to me, because I have heard about it; it happens everywhere, even if you or I don't see it. But I had never gathered my thoughts about such a serious topic. Reading the opinions of these authors helped me find out more about it, but I cannot say I have come to a clear and settled decision or opinion about euthanasia.
As James Rachels states, "I can understand why some people are opposed to all euthanasia, and insist …"

"Euthanasia is a long smooth-sounding word, and it conceals its danger as long, smooth-sounding words do, but the danger is there, nevertheless." As Pearl S. Buck explained through this quote, euthanasia and medically assisted suicide present a real danger. Although society refuses to see these dangers, euthanasia creates countless problems that shake society. Euthanasia remains a conditional issue; therefore, the laws created rely on weak ideas that allow for easy manipulation, as …

Euthanasia legalisation has been a controversial topic for years; studies have shown that arguments in the euthanasia debate often depend on the process used to take the life of the patient. There are many thoughts surrounding the issue of euthanasia and whether or not it should be legal. According to the Encyclopedia of American Law, euthanasia is categorized as a class of criminal homicide (Debate.org, par. 3). However, not all homicides are considered illegal. In today's society …

Euthanasia is the painless killing of a patient suffering from an incurable and painful disease or in an irreversible coma; it also means taking a deliberate action with the express intention of ending a life to relieve intractable suffering. Some interpret it as the practice of ending life in a mercy killing, assisted suicide, or a soft slow suicide. There are two main classifications of euthanasia. There is voluntary euthanasia, which is conducted with consent, where the patient decides for themselves to …

Legalizing Euthanasia. "Whose life is it, anyway?" A plea stated by the late Sue Rodrigues.
Rodrigues, a high-profile, terminally ill resident of British Columbia, Canada, suffered from a terminal illness (Robinson, 2001). She was helped to commit suicide by a physician, in violation of Canadian law. Many people, like Rodrigues, want to be in control of their final days. Terminally ill patients have a terminal disease and do not want to diminish their assets by incurring large medical …

How should a theological account of death and dying shape the moral debate concerning euthanasia? The debate on whether it is moral to assist in suicide or euthanasia has been strong and heated on both sides of the argument, and it has not gone away, although the bill arguing for assisted suicide and euthanasia was lost in the UK parliament last year.[footnoteRef:1] Using the works of Catholic theologians from the fourth century to the …

… the act of euthanasia upon a terminally ill patient. According to the Oxford Dictionary, euthanasia means the painless killing of a patient suffering from an incurable and painful disease or in an irreversible coma, and according to Euthanasia (2014), it is defined as the intentional killing, by act or omission, of a dependent human being for his or her alleged benefit. There are many kinds of euthanasia, including voluntary, non-voluntary, involuntary, assisted suicide, euthanasia by action, and euthanasia by omission.

Should euthanasia be legalised in the UK? The matter of euthanasia and assisted suicide is one of the most widely debated public policies in the UK today. Its legalisation will undoubtedly affect family and patient-doctor relationships and also challenge the concepts of what is considered to be ethical behaviour (Marker and Hamlon, 2005).
But with overwhelming public support for its legalisation, and unregulated assisted dying already commonplace in the medical profession (Doward, 2004), surely a …

… Rachels and John Paul II. James Rachels, an American philosopher who specialized in ethics, authored an article titled "Active and Passive Euthanasia," which describes the difference between two forms of euthanasia. Active euthanasia is defined as a circumstance in which a doctor administers drugs into a patient's body with the intent to end their life. Passive euthanasia is when a doctor withdraws medical attention from a patient, knowing that without the care they will cease to survive. After …

… the killing of patients by physicians, whether called "active euthanasia" or simply "euthanasia," is a topic of long-standing controversy (Mappes, Zembaty, and DeGrazia 59). Although active euthanasia is presently illegal in all fifty states and the District of Columbia, proposals for its legalization have been recurrently advanced. Most commonly, these proposals call for the legalization of active euthanasia. There are some who consider active euthanasia in any form intrinsically immoral and, for this reason …

Friday, May 8, 2020

Socialization of Social Media

Introduction. Social media has become an integral part of our lives. We are witnessing individuals using various social media applications: celebrating, grieving, and signing petitions that can change systems. It seems that social media, once a niche fashion of interaction confined to a few internet users, is now among the main interests of almost everybody who knows how to use a computer. While social media is an environment in which people can avoid socializing in person, being self-indulgent and more audience-oriented, for some, socialization can emerge there as a manifestation of the desire to be appreciated and followed within communities. Online social media includes forums, blogs, chat rooms, e-mail, web sites, dictionaries, internet discussion platforms and social networks (Mangold and Faulds, 2009: 358). The most important feature of social media is that individuals can express themselves to others through the internet. Individuals create profiles on the sites they use, communicate through them, and like and interact with others with the help of these profiles. Therefore, social media popularity must be customized according to the user (Hazar, 2011: 156). In this context, the most important feature that distinguishes the internet from traditional means of communication is the strong emergence of interaction in the communication process.
In the traditional communication environment, individuals are passive receivers and their ability to intervene in the communication process is limited, whereas on the internet the user's command of the communication process is more prominent through interaction (Timisi, 2003). The main features of social media can be listed as follows (Mavnacıoğlu, 2009: 64): • It is a chain of internet applications in which sharing and discussion are essential, without time and space constraints. • Individuals publish their own content on the internet and on mobile …

Primary socialization. In sociology this is the time when a person starts to acquire knowledge and skills through experiences in his or her environment at a young age. This process begins at home, where one learns about the social norms and cultural practices that are accepted in society. Primary socialization teaches children how to associate with the people around them, and this equips them with vital concepts like love, trust, honesty, integrity and togetherness. Family, childhood friends, …

Social Media Can Destroy Socialization. Science and technology have become a catalyst for human development. In recent years, the introduction of computers and the internet has dramatically changed the way we live and interact. From medical discoveries to transportation innovation, information access to space exploration, the internet has provided most of the changes in our society, at least in the last two decades. However, one possible outcome of such change may be seen negatively, as social networking …

Agents of Socialisation: The Mass Media. In the present day, the media is incorporated into our daily lives.
Every day, through newspapers, radio, television, email, the internet and social media, we are sucked into an electronic world which changes many of our beliefs and values about how we live our lives. It plays a far larger role in almost every person's life than it did 50 years ago, when the internet did not exist. It affects things such as our political views, tastes in music, and views of men …

Socialization can be defined as the process by which people learn to become members of a society (Tepperman and Curtis, 2011, p. 58). Thus, the socialization process of an individual starts from birth and continues throughout life. The period of socialization helps an individual to develop feelings and perceptions, learn the basics of social interaction, and learn to recognize and respond socially to parents and other important people in their lives (Tepperman and Curtis, 2011, p. 58). According …

Agents of Socialization. An agent of socialization is an individual or institution tasked with the replication of the social order; it is responsible for transferring the rules, expectations, norms, values, and folkways of a given social order. In advanced capitalist society, the principal agents of socialization include the family, the media, the school system, religious and spiritual institutions, and peer groups. Specific sites or groups carry out socialization. We call …

Socialization as a Function of Media. Mass media, particularly through mediums that project news and information, greatly affect what and how we learn about the world around us. In particular, television has become the outlet with the greatest socialization impact in its influence on young viewers.
The distribution of information has become part of the process by which people learn about societal values and behaviors and come to understand cultural expectations. Through entertainment and news …

Anti-social socialization: the effects of technology on the socialization of youth in the 21st century. Robert Elz, University of North Georgia. Abstract: In the 21st century, technology is integrated into every aspect of our lives. It is prevalent in all sections of our culture: our homes, our schools and our communities. But what kind of effect is it having on those in their formative years? Does the abundance of technology have an effect on the newer generations …

… the discussion of socialization in chapter 5 of the text, because it demonstrates how certain agents of socialization, particularly mass media, influence children's ideologies about how they should act, look and feel. This ideology and contribution of social norms is supported by the article from CBC News about a teenage girl who wants to "quit" social media in order to live in the real world. She explained that the reason for quitting her elite position on social media was for her 12-year-old …

… dominated by the messages that are constantly fed to us by the media. The media is so powerful that a majority of people do not even realize that it affects them in any way; in fact, most people are convinced that they are completely unaffected by it. One of the reasons the media is so powerful is the cycle of socialization.
The cycle of socialization can open one's eyes to why our society has specific views of …

The Agents of Socialization. Andrew Vachss, an American crime author, child protection consultant, and minority youth lawyer, once said, "All children are born pure egoists. They perceive their needs to the exclusion of all others. Only through socialization do they learn that some forms of gratification must be deferred and others denied" (Vachss). Vachss' view that inequalities and prejudice are in fact learned behaviors is supported by analysis of the agents of socialization, the groups that …

Wednesday, May 6, 2020

Non-banking financial intermediaries

Non-Bank Financial Intermediaries are privately owned, decentralized and relatively small-sized financial intermediaries. Some are primarily engaged in fund-based activities, while others provide financial services of diverse kinds. The former are known as Non-Banking Financial Companies (NBFCs). NBFCs have undergone radical transformation; the post-1995 overview is depicted with whatever information is available. NATURE. There are thousands of NBFCs, and only a small proportion of them report to the RBI. The RBI (Amendment) Act, 1997 defines an NBFC as an "institution or company whose principal business is to accept deposits under any scheme or arrangement or in any other manner, and to lend in any manner." As a result, a number of loan and investment companies have been registered under the Companies Act by business houses for the purpose of investment in group companies.

Tuesday, April 28, 2020

Miller achieve emotional intensity Essay Example

The Crucible is a highly emotional play, especially at the end of Act One and the beginning of Act Two. Arthur Miller integrates many dramatic techniques, including interesting, poignant characters, a great many dramatic and literary devices, powerful language, and themes of envy, power, hysteria and dignity. Miller also draws upon contextual significance: he uses the Salem witch trials as an allegory for the communist trials of McCarthyism. These techniques contribute to the overall emotional intensity at the end of Act One and the beginning of Act Two. The Crucible can be read by different audiences in varying ways; the original audience would have been more emotionally involved, and would therefore have found it more emotionally intense, whereas an audience today may be more detached from the story and find it less so. The Salem witch trials in The Crucible can be seen as a parallel to the McCarthyist era in which the play was written; to that audience, many things that an audience today might not notice would be deeply significant and emotional. The first similarity between the two eras is the way the characters accuse other people of being involved with the devil to save themselves from punishment. This happens when Abigail says "I saw Goody Sibber with the devil." Abigail is one of many characters suspected of witchcraft; by accusing someone else of being a witch she saves herself. The same thing happened under McCarthyism: if you accused someone else, you got a lighter sentence. This parallel with McCarthyism would have evoked great emotion when the play was first performed.
The second parallel is the way the court would be more lenient if people pleaded guilty. This is shown at the beginning of Act Two, when Elizabeth says "The Deputy Governor promise hangin' if they'll not confess." This evokes emotion not only because innocent people were denied a fair trial, but because at the time the play was first performed it would have been seen as a parallel to McCarthy's trials, where people who did not confess, under little or no evidence, faced a greater punishment. People would have been deeply moved by this significance. When the play was first performed, Senator McCarthy believed that communists were going to spoil the American way of life, the American dream. In The Crucible, Salem is a theocratic society in which the law and the church are the same; the townspeople worry that the devil is trying to ruin their way of life. The theocracy in the play is endorsed by the power that Reverend Parris holds in the village, as when he says "you will confess yourself or I will take you out and whip you to death" and "the devil can never overcome a minister." Both show the authority and power the church had in that society; again, this parallel would have had a deep impact on the audience of Miller's time. A modern audience might relate the persecution of difference in Salem to the modern persecution of Islam and the Middle East. Salem was perhaps a more extreme version of Senator McCarthy's trials, but the allegorical significance would still increase the emotional intensity of the play for its original audience.

Thursday, March 19, 2020

Geography of the Northern Hemisphere

Geography of the Northern Hemisphere The Northern Hemisphere is the northern half of the Earth. It begins at 0° (the equator) and continues north until it reaches 90°N latitude, the North Pole. The word hemisphere itself means half of a sphere, and since the Earth is considered an oblate spheroid, a hemisphere is half of it. Geography and Climate Like the Southern Hemisphere, the Northern Hemisphere has a varied topography and climate. However, there is more land in the Northern Hemisphere, so it is even more varied, and this plays a role in the weather patterns and climate there. The land in the Northern Hemisphere consists of all of Europe, North America and Asia, a portion of South America, two-thirds of the African continent and a very small portion of the Australian continent, with islands in New Guinea. Winter in the Northern Hemisphere lasts from around December 21 (the winter solstice) to the vernal equinox around March 20. Summer lasts from the summer solstice around June 21 to the autumnal equinox around September 21. These dates are due to the Earth's axial tilt: from December 21 to March 20, the Northern Hemisphere is tilted away from the sun, and from June 21 to September 21 it is tilted toward the sun. To aid in studying its climate, the Northern Hemisphere is divided into several climatic regions. The Arctic is the area north of the Arctic Circle at 66.5°N. It has a climate with very cold winters and cool summers. In the winter, it is in complete darkness for 24 hours per day, and in the summer it receives 24 hours of sunlight. South of the Arctic Circle to the Tropic of Cancer is the Northern Temperate Zone. This climatic area features mild summers and winters, but specific areas within the zone can have very different climatic patterns. For example, the southwestern United States features an arid desert climate with very hot summers, while the state of Florida in the southeastern U.S.
features a humid subtropical climate with a rainy season and mild winters. The Northern Hemisphere also encompasses a portion of the Tropics between the Tropic of Cancer and the equator. This area is usually hot all year and has a rainy summer season. The Coriolis Effect An important component of the Northern Hemisphere's physical geography is the Coriolis Effect and the specific direction that objects are deflected in the northern half of the Earth. In the Northern Hemisphere, any object moving over the Earth's surface deflects to the right. Because of this, any large patterns in air or water turn clockwise north of the equator. For example, there are many large ocean gyres in the North Atlantic and North Pacific, all of which turn clockwise. In the Southern Hemisphere, these directions are reversed because objects are deflected to the left. In addition, the rightward deflection of objects impacts the flow of air over the Earth and air pressure systems. A high-pressure system, for example, is an area where the atmospheric pressure is greater than that of the surrounding area. In the Northern Hemisphere, these move clockwise because of the Coriolis Effect. By contrast, low-pressure systems, areas where atmospheric pressure is less than that of the surrounding area, move counterclockwise because of the Coriolis Effect in the Northern Hemisphere. Population Because the Northern Hemisphere has more land area than the Southern Hemisphere, the majority of Earth's population and its largest cities are in its northern half. Some estimates say that the Northern Hemisphere is approximately 39.3% land, while the Southern Hemisphere is only 19.1% land. Reference: Wikipedia (13 June 2010). Northern Hemisphere. Retrieved from http://en.wikipedia.org/wiki/Northern_Hemisphere

Tuesday, March 3, 2020

Defining and Understanding Literacy

Defining and Understanding Literacy Simply put, literacy is the ability to read and write in at least one language, so just about everyone in developed countries is literate in the basic sense. In her book The Literacy Wars, Ilana Snyder argues that there is no single, correct view of literacy that would be universally accepted; there are a number of competing definitions, and these definitions are continually changing and evolving. The following quotes raise several issues about literacy: its necessity, its power, and its evolution. Observations on Literacy "Literacy is a human right, a tool of personal empowerment and a means for social and human development. Educational opportunities depend on literacy. Literacy is at the heart of basic education for all and essential for eradicating poverty, reducing child mortality, curbing population growth, achieving gender equality and ensuring sustainable development, peace, and democracy." (Why Is Literacy Important?, UNESCO, 2010) "The notion of basic literacy is used for the initial learning of reading and writing, which adults who have never been to school need to go through. The term functional literacy is kept for the level of reading and writing that adults are thought to need in a modern complex society. Use of the term underlines the idea that although people may have basic levels of literacy, they need a different level to operate in their day-to-day lives." (David Barton, Literacy: An Introduction to the Ecology of Written Language, 2006) To acquire literacy is more than to psychologically and mechanically dominate reading and writing techniques. It is to dominate those techniques in terms of consciousness; to understand what one reads and to write what one understands: it is to communicate graphically.
Acquiring literacy does not involve memorizing sentences, words or syllables, lifeless objects unconnected to an existential universe, but rather an attitude of creation and re-creation, a self-transformation producing a stance of intervention in one's context. (Paulo Freire, Education for Critical Consciousness, 1974) "There is hardly an oral culture or a predominantly oral culture left in the world today that is not somehow aware of the vast complex of powers forever inaccessible without literacy." (Walter J. Ong, Orality and Literacy: The Technologizing of the Word, 1982) Women and Literacy Joan Acocella, in a New Yorker review of the book The Woman Reader by Belinda Jack, had this to say in 2012: In the history of women, there is probably no matter, apart from contraception, more important than literacy. With the advent of the Industrial Revolution, access to power required knowledge of the world. This could not be gained without reading and writing, skills that were granted to men long before they were to women. Deprived of them, women were condemned to stay home with the livestock or, if they were lucky, with the servants. (Alternatively, they may have been the servants.) Compared with men, they led mediocre lives. In thinking about wisdom, it helps to read about wisdom, about Solomon or Socrates or whomever. Likewise goodness and happiness and love. To decide whether you have them or want to make the sacrifices necessary to get them, it is useful to read about them. Without such introspection, women seemed stupid; therefore, they were considered unfit for education; therefore, they weren't given an education; therefore, they seemed stupid. A New Definition? Barry Sanders, in A Is for Ox: Violence, Electronic Media, and the Silencing of the Written Word (1994), makes a case for a changing definition of literacy in the technological age.
We need a radical redefinition of literacy, one that includes a recognition of the vital importance that morality plays in shaping literacy. We need a radical redefinition of what it means for society to have all the appearances of literacy and yet to abandon the book as its dominant metaphor. We must understand what happens when the computer replaces the book as the prime metaphor for visualizing the self. It is important to remember that those who celebrate the intensities and discontinuities of postmodern electronic culture in print write from an advanced literacy. That literacy provides them the profound power of choosing their ideational repertoire. No such choice or power is available to the illiterate young person subjected to an endless stream of electronic images.

Sunday, February 16, 2020

Cross cultural management Case Study Example | Topics and Well Written Essays - 2250 words

Cross cultural management - Case Study Example SICLI is a well-known company in the security segment whose operations stretch back 90 years. The company originally had its operations localized only in France, but at a later stage its business operations were expanded into other geographical areas. All types of security-related products and services, such as fire extinguishers, gas detection, fire detection and security training services, are offered by the company. SICLI has expanded its operations into global markets such as the USA, Europe and Africa. French expats are utilized by the firm to enhance the efficiency of its African operations. The firm encompasses a large base of experienced employees, and it is not possible for them to adapt to a completely new environment. Employees of SICLI feel that they are part of an organization which ensures job security. The firm recruited individuals from diverse cultural backgrounds, but the most integral part was played by French employees. The company witnessed challenges when it was taken over by another group, Williams Holdings Plc. The new CEO implemented certain organizational changes which were not accepted by the majority of employees, since their implications and importance were not conveyed appropriately. It is important that top management, while expanding globally, stays well aligned with core business culture and values. Multinational strategies or structures incorporated by MNCs are of various types: multi-domestic, global, international and transnational.

Sunday, February 2, 2020

Relationship Essay Example | Topics and Well Written Essays - 750 words

Relationship - Essay Example It puts us in a category of our own where we see things through the same eyes, even though we are two separate individuals who have their own respective worldviews (Holt 2005). Even though Austin likes to communicate with me on a consistent basis, I have always asked him to meet me rather than calling me on the phone, because I believe our friendship is on such a level that phone calls could just demean us in some way. Hence it is best that we enjoy each other's company, and this can only happen when we meet regularly. Some important things that govern and essentially define our relationship include the respect that we have for our elders and the love and support for our mutual friends and colleagues. Both of us like to interact with kids, which automatically makes us people who like to hang around children quite a lot. Austin and I are known to be extroverts, which implies that we like to go out more and more, and thus enjoy the festivities of eating out, partying by the beach as well as a range of other fun-filled activities (Azzarone 2003). We sincerely love the feeling of being close to one another because this is how we view life in its own meticulous way. It also makes us enjoy the world around us together. These important aspects developed with the passage of time as we started knowing each other more, and thus we found out that our common traits were very uncommon amongst the people around us. The exceptional two that we were actually made us feel good about ourselves, which strengthened our friendship all the same. Our relationship is more supportive than defensive at any point in time. This is because we understand each other quite well, and it makes our lives easier in contrast to how other best friends live their lives. Our relationship has blossomed with the passage of time and I cannot recall a single instance where we ended up arguing between ourselves.
I believe this is because both of us respect one another and look up to our unity for the help and assistance that we may require. Our relationship has thus become a potent force because our trust levels have been tied to who we are and how we view our friendship (White 2002). Some of the specific factors that contribute to the situations which take place on a day-to-day level comprise our interactions with the people around us. We both believe in giving our best when it comes to our elders, since we respect them a lot. We always make an effort to help the underprivileged and needy around us, which is something that Austin and I gain satisfaction from. If ever there was a conflict between me and Austin, we would resolve it amicably. This is because both of us believe in keeping away from conflicts and rifts. Fortunately, we have never had a fight as yet, which gives us the edge to understand each other better. This is one way to know how much respect I hold for Austin, and likewise (Costley 2007). We may have differences of opinion, but these have never transpired into conflicts; hence their duration is not worth mentioning. The strategies that we use to resolve conflict would essentially involve listening to one another and giving the other individual the much-needed space so that he can think through things and get back. It is an important consideration, and both Austin and I are well aware of that. I believe these methods have been quite satisfactory, as conflicts are something that can literally mar the basis of any

Saturday, January 25, 2020

VaR Models in Predicting Equity Market Risk

VaR Models in Predicting Equity Market Risk Chapter 3 Research Design This chapter presents how the proposed VaR models are applied in predicting equity market risk. Basically, the thesis first outlines the collected empirical data. We next focus on verifying the assumptions usually engaged in VaR models and then identify whether the data characteristics are in line with these assumptions by examining the observed data. Various VaR models are subsequently discussed, beginning with the non-parametric approach (the historical simulation model) and followed by the parametric approaches under different distributional assumptions of returns, intentionally combined with the Cornish-Fisher Expansion technique. Finally, backtesting techniques are employed to evaluate the performance of the suggested VaR models. 3.1. Data The data used in the study are financial time series that reflect the daily historical price changes of two equity index assets: the FTSE 100 index of the UK market and the SP 500 of the US market. Mathematically, instead of using the arithmetic return, the paper employs daily log-returns. The full period on which the calculations are based stretches from 05/06/2002 to 22/06/2009 for each index. More precisely, to implement the empirical test, the period is divided into two sub-periods: the first series of empirical data, used for parameter estimation, spans from 05/06/2002 to 31/07/2007. The rest of the data, between 01/08/2007 and 22/06/2009, is used for predicting VaR figures and backtesting. Note that the latter stage is exactly the current global financial crisis period, which began in August 2007, dramatically peaked in the final months of 2008 and subsided significantly in the middle of 2009. Consequently, the study purposely examines the accuracy of the VaR models within this volatile time. 3.1.1.
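As a small illustration of the log-return transformation described above, the sketch below computes R_t = ln(P_t / P_{t-1}) from a price series. The prices shown are made up for illustration; the thesis uses FTSE 100 and SP 500 daily closing prices.

```python
import math

def log_returns(prices):
    """Daily log-returns R_t = ln(P_t / P_{t-1}) from a closing-price series."""
    return [math.log(p1 / p0) for p0, p1 in zip(prices, prices[1:])]

# Illustrative closing prices only, not real index data.
prices = [100.0, 101.0, 99.5, 100.2]
rets = log_returns(prices)  # three returns from four prices
```

Log-returns are convenient here because a multi-day return is simply the sum of the daily log-returns over the period.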
FTSE 100 index The FTSE 100 Index is a share index of the 100 most highly capitalised UK companies listed on the London Stock Exchange, which began on 3 January 1984. FTSE 100 companies represent about 81% of the market capitalisation of the whole London Stock Exchange, and the index has become the most widely used UK stock market indicator. In the dissertation, the full data set used for the empirical analysis consists of 1782 observations (1782 working days) of the UK FTSE 100 index covering the period from 05/06/2002 to 22/06/2009. 3.1.2. SP 500 index The SP 500 is a value-weighted index, published since 1957, of the prices of 500 large-cap common stocks actively traded in the United States. The stocks listed on the SP 500 are those of large publicly held companies that trade on either of the two largest American stock exchanges, NYSE Euronext and NASDAQ OMX. After the Dow Jones Industrial Average, the SP 500 is the most widely followed index of large-cap American stocks. The SP 500 refers not only to the index, but also to the 500 companies that have their common stock included in the index, and it is consequently considered a bellwether for the US economy. Similar to the FTSE 100, the data for the SP 500 are observed during the same period, with 1775 observations (1775 working days). 3.2. Data Analysis For the VaR models, one of the most important aspects is the set of assumptions relating to measuring VaR. This section first discusses several VaR assumptions and then examines the characteristics of the collected empirical data. 3.2.1. Assumptions 3.2.1.1. Normality assumption Normal distribution As mentioned in chapter 2, most VaR models assume that the return distribution is normally distributed with mean of 0 and standard deviation of 1 (see figure 3.1). Nonetheless, chapter 2 also shows that the actual returns in most previous empirical investigations do not completely follow the standard distribution.
Figure 3.1: Standard Normal Distribution Skewness The skewness is a measure of asymmetry of the distribution of the financial time series around its mean. Normally, data is assumed to be symmetrically distributed with skewness of 0. A dataset with either a positive or negative skew deviates from the normal distribution assumptions (see figure 3.2). This can cause parametric approaches, such as the Riskmetrics and the symmetric normal-GARCH(1,1) model under the assumption of normally distributed returns, to be less effective if asset returns are heavily skewed. The result can be an overestimation or underestimation of the VaR value, depending on the skew of the underlying asset returns. Figure 3.2: Plot of a positive or negative skew Kurtosis The kurtosis measures the peakedness or flatness of the distribution of a data sample and describes how concentrated the returns are around their mean. A high value of kurtosis means that more of the data's variance comes from extreme deviations. In other words, a high kurtosis means that the asset returns contain more extreme values than modeled by the normal distribution. Positive excess kurtosis is, according to Lee and Lee (2000), called leptokurtic and negative excess kurtosis is called platykurtic. Data which is normally distributed has kurtosis of 3. Figure 3.3: General forms of Kurtosis Jarque-Bera Statistic In statistics, Jarque-Bera (JB) is a test statistic for testing whether a series is normally distributed. In other words, the Jarque-Bera test is a goodness-of-fit measure of departure from normality, based on the sample kurtosis and skewness. The test statistic is defined as JB = (n/6)[S^2 + (K - 3)^2/4], where n is the number of observations, S is the sample skewness and K is the sample kurtosis. For large sample sizes, the test statistic has a Chi-square distribution with two degrees of freedom. Augmented Dickey-Fuller Statistic The Augmented Dickey-Fuller test (ADF) is a test for a unit root in a time series sample.
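To make the moment-based diagnostics concrete, here is a minimal pure-Python sketch of the sample skewness S, kurtosis K and the Jarque-Bera statistic JB = (n/6)[S^2 + (K - 3)^2/4]. Simple (population-style) moment estimators are assumed; textbook small-sample corrections are omitted.

```python
import math

def skew_kurt(xs):
    """Sample skewness S and kurtosis K using simple moment estimators."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    sd = math.sqrt(var)
    skew = sum((x - mean) ** 3 for x in xs) / (n * sd ** 3)
    kurt = sum((x - mean) ** 4 for x in xs) / (n * var ** 2)
    return skew, kurt

def jarque_bera(xs):
    """JB = (n/6) * [S^2 + (K - 3)^2 / 4]; approx. Chi-square(2) for large n."""
    n = len(xs)
    s, k = skew_kurt(xs)
    return n / 6.0 * (s ** 2 + (k - 3.0) ** 2 / 4.0)
```

A perfectly symmetric sample gives S = 0, so only the kurtosis term contributes to JB; a large JB value is evidence against normality.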
It is an augmented version of the Dickey-Fuller test for a larger and more complicated set of time series models. The ADF statistic used in the test is a negative number. The more negative it is, the stronger the rejection of the hypothesis that there is a unit root at some level of confidence. ADF critical values: (1%) -3.4334, (5%) -2.8627, (10%) -2.5674. 3.2.1.2. Homoscedasticity assumption Homoscedasticity refers to the assumption that the dependent variable exhibits similar amounts of variance across the range of values of an independent variable. Figure 3.4: Plot of Homoscedasticity Unfortunately, chapter 2, based on previous empirical studies, confirmed that financial markets usually experience unexpected events and uncertainties in prices (and returns) and exhibit non-constant variance (heteroskedasticity). Indeed, the volatility of financial asset returns changes over time, with periods when volatility is exceptionally high interspersed with periods when volatility is unusually low, namely volatility clustering. It is one of the widely stylised facts (stylised statistical properties of asset returns) which are common to a wide set of financial assets. Volatility clustering reflects that high-volatility events tend to cluster in time. 3.2.1.3. Stationarity assumption According to Cont (2001), the most essential prerequisite of any statistical analysis of market data is the existence of some statistical properties of the data under study which remain constant over time; if not, it is meaningless to try to recognize them. One of the hypotheses relating to the invariance of statistical properties of the return process in time is stationarity. This hypothesis assumes that for any set of time instants t1, ..., tn and any time interval tau, the joint distribution of the returns r(t1), ..., r(tn) is the same as the joint distribution of the returns r(t1 + tau), ..., r(tn + tau).
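To illustrate the regression idea behind the Dickey-Fuller family of tests, here is a deliberately simplified sketch: it estimates gamma in the regression dy_t = gamma * y_{t-1} + e_t with no constant, trend or lagged-difference terms. It is a toy version only, so its statistic must not be compared against the ADF critical values quoted above (which assume an intercept); the AR(1) series it is run on is synthetic.

```python
import math
import random

def dickey_fuller_t(y):
    """t-statistic for gamma in dy_t = gamma * y_{t-1} + e_t (a toy,
    non-augmented Dickey-Fuller regression: no constant, trend or lags).
    Strongly negative values argue against a unit root."""
    x = y[:-1]
    dy = [b - a for a, b in zip(y, y[1:])]
    sxx = sum(xi * xi for xi in x)
    gamma = sum(xi * di for xi, di in zip(x, dy)) / sxx
    resid = [di - gamma * xi for xi, di in zip(x, dy)]
    s2 = sum(r * r for r in resid) / (len(dy) - 1)
    return gamma / math.sqrt(s2 / sxx)

# A stationary AR(1) series should yield a strongly negative statistic.
random.seed(0)
y, prev = [], 0.0
for _ in range(300):
    prev = 0.5 * prev + random.gauss(0.0, 1.0)
    y.append(prev)
t_stat = dickey_fuller_t(y)
```

For real work one would use a full ADF implementation (intercept, trend and lagged differences, with the proper critical values), as the thesis does.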
The Augmented Dickey-Fuller test, in turn, will be used to examine the stationarity of the statistical properties of the returns. 3.2.1.4. Serial independence assumption There are a large number of tests of randomness of sample data. Autocorrelation plots are one common method of testing for randomness. Autocorrelation is the correlation between the returns at different points in time. It is the same as calculating the correlation between two different time series, except that the same time series is used twice: once in its original form and once lagged one or more time periods. The results can range from +1 to -1. An autocorrelation of +1 represents perfect positive correlation (i.e. an increase seen in one time series will lead to a proportionate increase in the other time series), while a value of -1 represents perfect negative correlation (i.e. an increase seen in one time series results in a proportionate decrease in the other time series). In terms of econometrics, the autocorrelation plot will be examined based on the Ljung-Box Q statistic test. However, instead of testing randomness at each distinct lag, it tests the overall randomness based on a number of lags. The Ljung-Box test statistic can be defined as Q = n(n + 2) * sum_{j=1..h} (rho_j^2 / (n - j)), where n is the sample size, rho_j is the sample autocorrelation at lag j, and h is the number of lags being tested. The hypothesis of randomness is rejected if Q exceeds the (1 - alpha) quantile (percent point function) of the Chi-square distribution with h degrees of freedom, where alpha is the significance level. 3.2.2. Data Characteristics Table 3.1 gives the descriptive statistics for the FTSE 100 and the SP 500 daily stock market prices and returns. Daily returns are computed as logarithmic price relatives: Rt = ln(Pt/Pt-1), where Pt is the closing daily price at time t. Figures 3.5a and 3.5b, 3.6a and 3.6b present the plots of returns and price index over time.
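The sample autocorrelation and Ljung-Box computations just described can be sketched directly in a few lines of pure Python (statistical packages provide the same quantities with more refinements):

```python
def sample_autocorr(xs, lag):
    """Sample autocorrelation of the series with itself lagged by `lag`."""
    n = len(xs)
    mean = sum(xs) / n
    denom = sum((x - mean) ** 2 for x in xs)
    num = sum((xs[t] - mean) * (xs[t - lag] - mean) for t in range(lag, n))
    return num / denom

def ljung_box_q(xs, h):
    """Ljung-Box Q = n(n+2) * sum_{j=1..h} rho_j^2 / (n - j)."""
    n = len(xs)
    return n * (n + 2) * sum(sample_autocorr(xs, j) ** 2 / (n - j)
                             for j in range(1, h + 1))
```

A strongly alternating series, for example, has a large negative lag-1 autocorrelation and hence a large Q, so the randomness hypothesis would be rejected against the Chi-square critical value.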
Besides, Figures 3.7a and 3.7b, 3.8a and 3.8b illustrate the frequency distributions of the FTSE 100 and the SP 500 daily return data with a normal distribution curve imposed, spanning from 05/06/2002 through 22/06/2009.

Table 3.1: Diagnostics table of statistical characteristics of the returns of the FTSE 100 index and SP 500 index between 05/06/2002 and 22/06/2009.

DIAGNOSTICS                        SP 500                     FTSE 100
Number of observations             1774                       1781
Largest return                     10.96%                     9.38%
Smallest return                    -9.47%                     -9.26%
Mean return                        -0.0001                    -0.0001
Variance                           0.0002                     0.0002
Standard deviation                 0.0144                     0.0141
Skewness                           -0.1267                    -0.0978
Excess kurtosis                    9.2431                     7.0322
Jarque-Bera                        694.485***                 2298.153***
Augmented Dickey-Fuller (ADF) (2)  -37.6418                   -45.5849
Q(12)                              20.0983* (autocorr. 0.04)  93.3161*** (autocorr. 0.03)
Q2(12)                             1348.2*** (autocorr. 0.28) 1536.6*** (autocorr. 0.25)
Ratio of SD/mean                   144                        141

Note: 1. *, **, and *** denote significance at the 10%, 5%, and 1% levels, respectively. 2. The 95% critical value for the augmented Dickey-Fuller statistic is -3.4158.

Figure 3.5a: The FTSE 100 daily returns from 05/06/2002 to 22/06/2009
Figure 3.5b: The SP 500 daily returns from 05/06/2002 to 22/06/2009
Figure 3.6a: The FTSE 100 daily closing prices from 05/06/2002 to 22/06/2009
Figure 3.6b: The SP 500 daily closing prices from 05/06/2002 to 22/06/2009
Figure 3.7a: Histogram showing the FTSE 100 daily returns combined with a normal distribution curve, spanning from 05/06/2002 through 22/06/2009
Figure 3.7b: Histogram showing the SP 500 daily returns combined with a normal distribution curve, spanning from 05/06/2002 through 22/06/2009
Figure 3.8a: Diagram showing the FTSE 100's frequency distribution combined with a normal distribution curve, spanning from 05/06/2002 through 22/06/2009
Figure 3.8b: Diagram showing the SP 500's frequency distribution combined with a normal distribution curve, spanning from 05/06/2002 through 22/06/2009

Table 3.1 shows that the FTSE 100 and the SP 500 average daily returns
are approximately 0 percent, or at least very small compared to the sample standard deviation (the standard deviation is 141 and 144 times the size of the average return for the FTSE 100 and SP 500, respectively). This is why the mean is often set to zero when modelling daily portfolio returns, which reduces the uncertainty and imprecision of the estimates. In addition, the large standard deviation compared to the mean supports the evidence that daily changes are dominated by randomness and that the small mean can be disregarded in risk measure estimates. Moreover, the paper also employs five statistics that are often used in analysing data, namely the skewness, kurtosis, Jarque-Bera, Augmented Dickey-Fuller (ADF) and Ljung-Box tests, to examine the full empirical period from 05/06/2002 through 22/06/2009. Figures 3.7a and 3.7b demonstrate the histogram of the FTSE 100 and the SP 500 daily return data with the normal distribution imposed. The distribution of both indexes has longer, fatter tails and higher probabilities for extreme events than the normal distribution, in particular on the negative side (negative skewness implying that the distribution has a long left tail). Fatter negative tails mean a higher probability of large losses than the normal distribution would suggest. The distribution is also more peaked around its mean than the normal distribution; indeed, the value for kurtosis is very high (10 and 12 for the FTSE 100 and the SP 500, respectively, compared to 3 for the normal distribution) (also see Figures 3.8a and 3.8b for more details). In other words, the most prominent deviation from the normal distributional assumption is the kurtosis, which can be seen from the middle bars of the histogram rising above the normal distribution. Moreover, it is obvious that outliers still exist, which indicates that excess kurtosis is still present. The Jarque-Bera test rejects normality of returns at the 1% level of significance for both indexes.
Overall, the samples exhibit the typical financial characteristics: volatility clustering and leptokurtosis. Besides that, the daily returns for both indexes (presented in Figures 3.5a and 3.5b) reveal that volatility occurs in bursts; in particular, the returns were very volatile at the beginning of the examined period, from June 2002 to the middle of June 2003. After remaining stable for about 4 years, the returns of the two well-known stock indexes were highly volatile from July 2007 (when the credit crunch was about to begin) and peaked dramatically from July 2008 to the end of June 2009. Generally, there are two recognised characteristics of the collected daily data. First, extreme outcomes occur more often and are larger than predicted by the normal distribution (fat tails). Second, the size of market movements is not constant over time (conditional volatility). In terms of stationarity, the Augmented Dickey-Fuller test is adopted for the unit root test. The null hypothesis of this test is that there is a unit root (the time series is non-stationary). The alternative hypothesis is that the time series is stationary. If the null hypothesis is rejected, it means that the series is a stationary time series. In this thesis, the paper employs the ADF unit root test including an intercept and a trend term on returns. The results from the ADF tests indicate that the test statistics for the FTSE 100 and the SP 500 are -45.5849 and -37.6418, respectively. Such values are significantly less than the 95% critical value for the augmented Dickey-Fuller statistic (-3.4158). Therefore, we can reject the unit root null hypothesis and conclude that the daily return series are robustly stationary. Finally, Table 3.1 shows the Ljung-Box test statistics for serial correlation of the return and squared return series for k = 12 lags, denoted by Q(k) and Q2(k), respectively.
The Q(12) statistic is statistically significant, implying the presence of serial correlation in the FTSE 100 and the SP 500 daily return series (first moment dependencies). In other words, the return series exhibit linear dependence. Figure 3.9a: Autocorrelations of the FTSE 100 daily returns for Lags 1 through 100, covering 05/06/2002 to 22/06/2009. Figure 3.9b: Autocorrelations of the SP 500 daily returns for Lags 1 through 100, covering 05/06/2002 to 22/06/2009. Figures 3.9a and 3.9b and the autocorrelation coefficients (presented in Table 3.1) show that the FTSE 100 and the SP 500 daily returns did not display any systematic pattern and the returns have very little autocorrelation. According to Christoffersen (2003), in this situation we can write: Corr(Rt+1, Rt+1-lambda) ≈ 0, for lambda = 1, 2, 3, ..., 100. Therefore, returns are almost impossible to predict from their own past. One note is that since the mean of daily returns for both indexes (-0.0001) is not significantly different from zero, the variances of the return series are measured by squared returns. The Ljung-Box Q2 test statistic for the squared returns is much higher, indicating the presence of serial correlation in the squared return series. Figures 3.10a and 3.10b and the autocorrelation coefficients (presented in Table 3.1) also confirm the autocorrelations in squared returns (variances) for the FTSE 100 and the SP 500 data; more importantly, variance displays positive correlation with its own past, especially at short lags: Corr(R2t+1, R2t+1-lambda) > 0, for lambda = 1, 2, 3, ..., 100. Figure 3.10a: Autocorrelations of the FTSE 100 squared daily returns Figure 3.10b: Autocorrelations of the SP 500 squared daily returns 3.3.
Calculation of Value at Risk This section puts much emphasis on how to calculate VaR figures for both single return indexes from the proposed models, including the Historical Simulation, the Riskmetrics, the Normal-GARCH(1,1) (or N-GARCH(1,1)) and the Student-t GARCH(1,1) (or t-GARCH(1,1)) models. Except for the historical simulation model, which does not make any assumptions about the shape of the distribution of asset returns, the others have commonly been studied under the assumption that the returns are normally distributed. Based on the previous section examining the data, this assumption is rejected because observed extreme outcomes of both single index returns occur more often and are larger than predicted by the normal distribution. Also, the volatility tends to change through time, and periods of high and low volatility tend to cluster together. Consequently, the four proposed VaR models under the normal distribution either have particular limitations or are unrealistic. Specifically, the historical simulation assumes that the historically simulated returns are independently and identically distributed through time. Unfortunately, this assumption is impractical due to the volatility clustering of the empirical data. Similarly, although the Riskmetrics approach tries to avoid relying on sample observations and makes use of additional information contained in the assumed distribution function, its normal distributional assumption is also unrealistic given the results of examining the collected data. The normal-GARCH(1,1) model and the student-t GARCH(1,1) model, on the other hand, can capture the fat tails and volatility clustering which occur in the observed financial time series data, but the normal distributional assumption of returns is likewise implausible compared to the empirical data.
Despite all this, the thesis still uses the four models under the normal distributional assumption of returns, in order to compare and evaluate their estimated results against the predicted results based on the Student-t distributional assumption of returns. Besides, since the empirical data exhibit fatter tails than the normal distribution, the thesis intentionally employs the Cornish-Fisher Expansion technique to correct the z-value from the normal distribution to account for fatter tails, and then compares these results with the two sets of results above. Therefore, in this chapter we purposely calculate VaR by separating these three procedures into three different sections; the final results will be discussed at length in chapter 4. 3.3.1. Components of VaR measures Throughout the analysis, a holding period of one trading day will be used. For the significance level, various values for the left-tail probability will be considered, ranging from the very conservative level of 1 percent, through the middle level of 2.5 percent, to the less cautious 5 percent. The various VaR models will be estimated using the historical data of the two single return index samples, stretching from 05/06/2002 through 31/07/2007 (consisting of 1305 and 1298 price observations for the FTSE 100 and the SP 500, respectively) for the parameter estimation, and from 01/08/2007 to 22/06/2009 for predicting VaRs and backtesting.
Non-parametric approach Historical Simulation As mentioned above, the historical simulation model assumes that the change in market factors from today to tomorrow will be the same as it was at some time in the past, and therefore VaR is computed from the historical returns distribution. Consequently, we treat this non-parametric approach in its own section. Chapter 2 showed that calculating VaR with the historical simulation model is not mathematically complex, since the measure only requires a reasonable period of historical data. Thus, the first task is to obtain an adequate historical time series for simulating. Many previous studies report that the model's predicted results are relatively reliable once the window of data used for simulating daily VaRs is no shorter than 1000 observed days. In this sense, the study is based on a sliding window of the previous 1305 and 1298 price observations (1304 and 1297 return observations) for the FTSE 100 and the SP 500, respectively, spanning from 05/06/2002 through 31/07/2007. This window was selected rather than a larger one because adding more historical data means adding older data that could be irrelevant to the future development of the return indexes. After sorting the past returns in ascending order and attributing them to equally spaced classes, the predicted VaR is determined as the log-return lying on the target percentile; in this thesis, the three widely used percentiles of the 1%, 2.5% and 5% lower tail of the return distribution. The result is a frequency distribution of returns, displayed as a histogram and shown in Figures 3.11a and 3.11b below. The vertical axis shows the number of days on which returns fall into the various classes. The red vertical lines in the histogram separate the lowest 1%, 2.5% and 5% of returns from the remaining (99%, 97.5% and 95%) returns.
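The sorting-and-percentile procedure just described can be sketched in a few lines of Python. This is a hedged illustration: the sample below is simulated as a stand-in for the 1304 in-sample FTSE 100 returns, and the index rule `round(p * n) - 1` reproduces the 13th/33rd/65th-lowest-return convention used in the text.

```python
import random

def historical_var(returns, tail_prob):
    """Historical-simulation VaR: the loss at the tail_prob percentile of
    the empirical return distribution, reported as a positive number."""
    ordered = sorted(returns)                       # ascending: worst first
    # round(p * n) - 1 gives the 13th, 33rd and 65th lowest returns
    # for p = 1%, 2.5%, 5% and n = 1304, matching the text.
    k = max(round(tail_prob * len(ordered)) - 1, 0)
    return -ordered[k]

# Simulated stand-in sample for the 1304 in-sample FTSE 100 returns.
rng = random.Random(0)
sample = [rng.gauss(0.0, 0.014) for _ in range(1304)]
for p in (0.01, 0.025, 0.05):
    print(f"{1 - p:.1%} VaR: {historical_var(sample, p):.4%}")
```

The same function applied to the actual in-sample index returns would yield the -3.2%, -2.28% and -1.67% figures reported for the FTSE 100 below.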
For the FTSE 100, since the histogram is drawn from 1304 daily returns, the 99%, 97.5% and 95% daily VaRs are approximately the 13th, 33rd and 65th lowest returns in this dataset, which are -3.2%, -2.28% and -1.67%, respectively, and are roughly marked in the histogram by the red vertical lines. The interpretation is that the VaR gives a number such that there is, say, a 1% chance of losing more than 3.2% of the single asset value tomorrow (on 1st August 2007). The SP 500 VaR figures, on the other hand, are a little smaller than those of the UK stock index, at -2.74%, -2.03% and -1.53% for the 99%, 97.5% and 95% confidence levels, respectively. Figure 3.11a: Histogram of daily returns of the FTSE 100 between 05/06/2002 and 31/07/2007. Figure 3.11b: Histogram of daily returns of the SP 500 between 05/06/2002 and 31/07/2007. Having predicted VaRs for the first day of the forecast period, we then calculate VaRs continuously for the period from 01/08/2007 to 22/06/2009. Whether this non-parametric model performs accurately in the turbulent period will be discussed at length in chapter 4. 3.3.2.2. Parametric approaches under the normal distributional assumption of returns This section presents how to calculate daily VaRs using the parametric approaches, namely the RiskMetrics, the normal-GARCH(1,1) and the Student-t GARCH(1,1), under the normal distributional assumption of returns. The results and the validity of each model during the turbulent period will be considered in depth in chapter 4. 3.3.2.2.1. The RiskMetrics Compared with the historical simulation model, the RiskMetrics, as discussed in chapter 2, does not rely solely on sample observations; instead, it makes use of additional information contained in the normal distribution function. All that is needed is the current estimate of volatility.
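That current volatility estimate comes from the EWMA recursion of formula (2.9). A minimal sketch follows, with simulated returns standing in for the index series and Python's `statistics.NormalDist` playing the role of Excel's NORMSINV:

```python
import random
from statistics import NormalDist

LAMBDA = 0.94  # RiskMetrics decay factor for one-day volatility

def riskmetrics_variance(returns, init_var):
    """EWMA recursion of formula (2.9):
    var_t = LAMBDA * var_{t-1} + (1 - LAMBDA) * r_{t-1}**2."""
    var = init_var
    for r in returns:
        var = LAMBDA * var + (1 - LAMBDA) * r * r
    return var

def normal_var(sigma, tail_prob):
    """One-day normal VaR of formula (2.6): -z_p * sigma, as a positive
    loss; inv_cdf replaces Excel's NORMSINV."""
    return -NormalDist().inv_cdf(tail_prob) * sigma

# Simulated returns standing in for the FTSE 100 / SP 500 series.
rng = random.Random(1)
rets = [rng.gauss(0.0, 0.014) for _ in range(1000)]
sigma = riskmetrics_variance(rets, 0.014 ** 2) ** 0.5
for p in (0.01, 0.025, 0.05):
    print(f"{1 - p:.1%} VaR: {normal_var(sigma, p):.4%}")
```

Running the recursion over the actual in-sample returns, then applying `normal_var` day by day over the forecast period, mirrors the procedure described in the text.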
In this sense, we first calculate the daily RiskMetrics variance for both indexes over the parameter estimation period from 05/06/2002 to 31/07/2007, based on the well-known RiskMetrics variance formula (2.9). Specifically, we use the fixed decay factor λ = 0.94 (the value the RiskMetrics system suggests for forecasting one-day volatility). The other inputs to the recursion are easily obtained: the squared log-return and the variance of the previous day, respectively. After calculating the daily variance, we then measure VaRs for the forecasting period from 01/08/2007 to 22/06/2009 at confidence levels of 99%, 97.5% and 95%, based on the normal VaR formula (2.6), where the critical z-value of the normal distribution at each significance level is simply computed using the Excel function NORMSINV. 3.3.2.2.2. The Normal-GARCH(1,1) model For GARCH models, chapter 2 established that the most important step is to estimate the model parameters ω, α and β. These parameters have to be calculated numerically, using the method of maximum likelihood estimation (MLE). In practice, rather than handling the mathematical calculations by hand, many previous studies use professional econometric software for the MLE; accordingly, the normal-GARCH(1,1) is estimated with the well-known econometric tool STATA (see Table 3.2 below).

Table 3.2. The parameter statistics of the Normal-GARCH(1,1) model for the FTSE 100 and the SP 500

Normal-GARCH(1,1)*
Parameters               FTSE 100     SP 500
α                        0.0955952    0.0555244
β                        0.8907231    0.9289999
ω                        0.0000012    0.0000011
α + β                    0.9863183    0.9845243
Number of Observations   1304         1297
Log likelihood           4401.63      4386.964

* Note: In this section, we report the results from the Normal-GARCH(1,1) model using the method of maximum likelihood, under the assumption that the errors conditionally follow the normal distribution, with a significance level of 5%.
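Given the Table 3.2 estimates, the conditional variance recursion of formula (2.11) and the implied long-run volatility can be sketched as follows (the three sample returns fed into the recursion are hypothetical illustration values; the thesis uses the actual index series):

```python
from statistics import NormalDist

def garch_variance_path(returns, omega, alpha, beta, init_var):
    """Conditional variance recursion of formula (2.11):
    var_t = omega + alpha * r_{t-1}**2 + beta * var_{t-1}."""
    path, var = [init_var], init_var
    for r in returns:
        var = omega + alpha * r * r + beta * var
        path.append(var)
    return path

# FTSE 100 estimates from Table 3.2.
omega, alpha, beta = 0.0000012, 0.0955952, 0.8907231

# Implied unconditional (long-run) daily variance: omega / (1 - alpha - beta).
lr_var = omega / (1 - alpha - beta)
print(f"long-run daily sd: {lr_var ** 0.5:.2%}")  # about 0.94%, as in the text

# One-day 99% normal VaR from the latest conditional variance
# (hypothetical returns purely for illustration).
path = garch_variance_path([0.01, -0.02, 0.005], omega, alpha, beta, lr_var)
print(f"99% VaR: {-NormalDist().inv_cdf(0.01) * path[-1] ** 0.5:.4%}")
```

The long-run standard deviation printed here is the 0.94% figure discussed in the next paragraph.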
According to Table 3.2, the coefficients on the lagged squared returns (α) for both indexes are positive, indicating that strong ARCH effects are apparent in both financial markets. Also, the coefficients on the lagged conditional variance (β) are significantly positive and less than one, indicating that the impact of 'old' news on volatility is significant. The magnitude of β is especially high (around 0.89-0.93), indicating a long memory in the variance. The estimate of ω was 1.2E-06 for the FTSE 100 and 1.1E-06 for the SP 500, implying a long-run standard deviation of daily market returns of about 0.94% and 0.84%, respectively. The log-likelihood for this model was 4401.63 for the FTSE 100 and 4386.964 for the SP 500. The log-likelihood ratios rejected the hypothesis of normality very strongly. After estimating the model parameters, we measure the conditional variance (volatility) for the parameter estimation period from 05/06/2002 to 31/07/2007 based on the conditional variance formula (2.11), whose inputs are the squared log-return and the conditional variance of the previous day, respectively. We then measure predicted daily VaRs for the forecasting period from 01/08/2007 to 22/06/2009 at confidence levels of 99%, 97.5% and 95% using the normal VaR formula (2.6). Again, the critical z-value of the normal distribution at significance levels of 1%, 2.5% and 5% is computed using the Excel function NORMSINV. 3.3.2.2.3. The Student-t GARCH(1,1) model Unlike the Normal-GARCH(1,1) approach, this model assumes that the volatility (or the errors of the returns) follows the Student-t distribution. In fact, many previous studies have suggested that the symmetric GARCH(1,1) model with Student-t errors is more accurate than its normal counterpart when examining financial time series.
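For illustration, the Student-t GARCH(1,1) objective that the MLE software maximises can be written down directly. This sketch assumes the common standardised-t parameterisation with unit-variance errors; it is an assumption, not a reproduction of STATA's internal routine, and the degrees-of-freedom value below is illustrative since the tables do not report it.

```python
import math

def t_garch_loglik(returns, omega, alpha, beta, nu, init_var):
    """Log-likelihood of a GARCH(1,1) with standardised Student-t errors
    (unit-variance t, nu > 2): the kind of objective maximised
    numerically in the estimation step."""
    const = (math.lgamma((nu + 1) / 2) - math.lgamma(nu / 2)
             - 0.5 * math.log(math.pi * (nu - 2)))
    ll, var = 0.0, init_var
    for r in returns:
        ll += (const - 0.5 * math.log(var)
               - (nu + 1) / 2 * math.log(1 + r * r / var / (nu - 2)))
        var = omega + alpha * r * r + beta * var
    return ll

# Evaluated at the Table 3.3 FTSE 100 estimates; nu = 8 is illustrative.
print(t_garch_loglik([0.01, -0.02, 0.005],
                     0.0000011, 0.0926120, 0.8946485, nu=8, init_var=2e-4))
```

A numerical optimiser searching over (ω, α, β, ν) to maximise this function is what produces estimates like those in Table 3.3.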
Accordingly, the paper additionally employs the Student-t GARCH(1,1) approach to measure VaRs. In this section, we use this model under the normal distributional assumption of returns. The first step is to estimate the model parameters by maximum likelihood estimation, obtained with STATA (see Table 3.3).

Table 3.3. The parameter statistics of the Student-t GARCH(1,1) model for the FTSE 100 and the SP 500

Student-t GARCH(1,1)*
Parameters               FTSE 100     SP 500
α                        0.0926120    0.0569293
β                        0.8946485    0.9354794
ω                        0.0000011    0.0000006
α + β                    0.9872605    0.9924087
Number of Observations   1304         1297
Log likelihood           4406.50      4399.24

* Note: In this section, we report the results from the Student-t GARCH(1,1) model using the method of maximum likelihood, under the assumption that the errors conditionally follow the Student-t distribution, with a significance level of 5%.

Table 3.3 identifies the same characteristics in the Student-t GARCH(1,1) parameters as in the normal-GARCH(1,1) approach. Specifically, the estimates of α and β show that evidently strong ARCH effects occurred in the UK and US financial markets during the parameter estimation period from 05/06/2002 to 31/07/2007. Moreover, as Floros (2008) notes, there was also a considerable impact of 'old' news on volatility as well as a long memory in the variance. We then follow the same steps as for calculating VaRs with the normal-GARCH(1,1) model. 3.3.2.3. Parametric approaches under the normal distributional assumption of returns modified by the Cornish-Fisher Expansion technique Section 3.3.2.2 measured the VaRs using parametric approaches under the assumption that returns are normally distributed. Regardless of their results and performance, it is clear that this assumption is unrealistic, given that the collected empirical data exhibit fatter tails than the normal distribution.
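The correction announced in the section heading above adjusts the normal z-value using the sample skewness and excess kurtosis. The thesis does not spell out the exact expansion it uses, so the standard fourth-moment Cornish-Fisher form below is an assumption, shown here with the FTSE 100 sample moments from Table 3.1:

```python
from statistics import NormalDist

def cornish_fisher_z(tail_prob, skew, excess_kurt):
    """Fourth-moment Cornish-Fisher adjustment of the normal quantile;
    a standard form, assumed here since the thesis leaves it implicit."""
    z = NormalDist().inv_cdf(tail_prob)
    return (z
            + (z ** 2 - 1) * skew / 6
            + (z ** 3 - 3 * z) * excess_kurt / 24
            - (2 * z ** 3 - 5 * z) * skew ** 2 / 36)

# With the FTSE 100 sample moments of Table 3.1, the 1% quantile moves
# well beyond the plain normal z of about -2.33, reflecting the fat tails.
print(cornish_fisher_z(0.01, skew=-0.0978, excess_kurt=7.0322))
```

Replacing the NORMSINV z-value with this adjusted quantile in the VaR formula (2.6) yields the CFE-modified VaRs discussed next.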
Consequently, in this section the study intentionally employs the Cornish-Fisher Expansion (CFE) technique to correct the z-value from the normal distribution so as to account for fatter tails. Again, whether the proposed models performed well within the recent crisis period will be assessed at length in chapter 4. 3.3.2.3.1. The CFE-modified RiskMetrics VaR Models in Predicting Equity Market Risk Chapter 3 Research Design This chapter presents how to apply the proposed VaR models in predicting equity market risk. Basically, the thesis first outlines the collected empirical data. We next focus on the assumptions usually engaged by the VaR models and then identify whether the data characteristics are in line with these assumptions by examining the observed data. The various VaR models are subsequently discussed, beginning with the non-parametric approach (the historical simulation model), followed by the parametric approaches under different distributional assumptions of returns, intentionally combined with the Cornish-Fisher Expansion technique. Finally, backtesting techniques are employed to evaluate the performance of the suggested VaR models. 3.1. Data The data used in the study are financial time series reflecting the daily historical price changes of two single equity index assets: the FTSE 100 index of the UK market and the SP 500 of the US market. Mathematically, instead of the arithmetic return, the paper employs daily log-returns. The full period on which the calculations are based stretches from 05/06/2002 to 22/06/2009 for each single index. More precisely, to implement the empirical test, the period is divided into two sub-periods: the first series of empirical data, used for the parameter estimation, spans from 05/06/2002 to 31/07/2007.
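The daily log-returns mentioned above are computed directly from the closing-price series; a minimal sketch, using hypothetical prices in place of the actual index data (1782 FTSE 100 and 1775 SP 500 observations in the thesis):

```python
import math

def log_returns(prices):
    """Daily log-returns R_t = ln(P_t / P_{t-1}) from closing prices."""
    return [math.log(p1 / p0) for p0, p1 in zip(prices, prices[1:])]

# Hypothetical closing prices purely for illustration.
prices = [100.0, 101.5, 100.8, 99.2, 100.1]
rets = log_returns(prices)
mean = sum(rets) / len(rets)
sd = (sum((r - mean) ** 2 for r in rets) / len(rets)) ** 0.5
print(f"mean {mean:.6f}, sd {sd:.6f}")  # sd far exceeds the mean
```

Note that a price series of n observations yields n - 1 returns, which is why the thesis reports 1304 and 1297 in-sample returns from 1305 and 1298 prices.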
The rest of the data, between 01/08/2007 and 22/06/2009, is used for predicting VaR figures and backtesting. Note that the latter stage coincides exactly with the current global financial crisis, which began in August 2007, peaked dramatically in the closing months of 2008 and subsided significantly by the middle of 2009. Consequently, the study purposely examines the accuracy of the VaR models within this volatile time. 3.1.1. FTSE 100 index The FTSE 100 Index is a share index of the 100 most highly capitalised UK companies listed on the London Stock Exchange, begun on 3rd January 1984. FTSE 100 companies represent about 81% of the market capitalisation of the whole London Stock Exchange, and the index has become the most widely used UK stock market indicator. In the dissertation, the full dataset used for the empirical analysis consists of 1782 observations (1782 working days) of the UK FTSE 100 index covering the period from 05/06/2002 to 22/06/2009. 3.1.2. SP 500 index The SP 500 is a value-weighted index, published since 1957, of the prices of 500 large-cap common stocks actively traded in the United States. The stocks listed on the SP 500 are those of large publicly held companies that trade on either of the two largest American stock exchanges, NYSE Euronext and NASDAQ OMX. After the Dow Jones Industrial Average, the SP 500 is the most widely followed index of large-cap American stocks. The SP 500 refers not only to the index but also to the 500 companies whose common stock is included in the index, and it is consequently considered a bellwether for the US economy. Similar to the FTSE 100, the data for the SP 500 are observed over the same period, with 1775 observations (1775 working days). 3.2. Data Analysis For the VaR models, one of the most important aspects is the set of assumptions underlying the VaR measure. This section first discusses several VaR assumptions and then examines the characteristics of the collected empirical data. 3.2.1.
Assumptions 3.2.1.1. Normality assumption Normal distribution As mentioned in chapter 2, most VaR models assume that the return distribution is normally distributed with mean 0 and standard deviation 1 (see Figure 3.1). Nonetheless, chapter 2 also shows that the actual returns in most previous empirical investigations do not completely follow the normal distribution. Figure 3.1: Standard Normal Distribution Skewness The skewness is a measure of the asymmetry of the distribution of the financial time series around its mean. Normally, data are assumed to be symmetrically distributed with skewness of 0. A dataset with either a positive or negative skew deviates from the normal distribution assumption (see Figure 3.2). This can cause parametric approaches, such as the RiskMetrics and the symmetric normal-GARCH(1,1) model under the assumption of normally distributed returns, to be less effective if asset returns are heavily skewed. The result can be an overestimation or underestimation of the VaR value, depending on the skew of the underlying asset returns. Figure 3.2: Plot of a positive or negative skew Kurtosis The kurtosis measures the peakedness or flatness of the distribution of a data sample and describes how concentrated the returns are around their mean. A high value of kurtosis means that more of the data's variance comes from extreme deviations. In other words, a high kurtosis means that the asset returns contain more extreme values than modelled by the normal distribution. Positive excess kurtosis is, according to Lee and Lee (2000), called leptokurtic, and negative excess kurtosis is called platykurtic. Normally distributed data have a kurtosis of 3. Figure 3.3: General forms of Kurtosis Jarque-Bera Statistic In statistics, Jarque-Bera (JB) is a test statistic for testing whether a series is normally distributed.
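The shape statistics just introduced, and the Jarque-Bera test built from them, take only a few lines to compute; a sketch on a toy symmetric sample:

```python
def sample_moments(x):
    """Sample skewness S and kurtosis K (for normal data, S = 0 and K = 3)."""
    n = len(x)
    m = sum(x) / n
    s2 = sum((v - m) ** 2 for v in x) / n
    skew = sum((v - m) ** 3 for v in x) / n / s2 ** 1.5
    kurt = sum((v - m) ** 4 for v in x) / n / s2 ** 2
    return skew, kurt

def jarque_bera(x):
    """JB = (n/6) * (S**2 + (K - 3)**2 / 4); asymptotically chi-square
    with two degrees of freedom (1% critical value about 9.21)."""
    s, k = sample_moments(x)
    return len(x) / 6 * (s ** 2 + (k - 3) ** 2 / 4)

# Toy two-point symmetric sample: S = 0 and K = 1 (platykurtic),
# so JB reduces to n/6.
data = [-1.0, 1.0] * 50
print(sample_moments(data), jarque_bera(data))
```

Applied to the actual return series, these are the skewness, excess kurtosis and JB entries reported in Table 3.1.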
In other words, the Jarque-Bera test is a goodness-of-fit measure of departure from normality, based on the sample kurtosis and skewness. The test statistic JB is defined as: JB = (n/6) [S² + (K − 3)²/4], where n is the number of observations, S is the sample skewness and K is the sample kurtosis. For large sample sizes, the test statistic has a Chi-square distribution with two degrees of freedom. Augmented Dickey-Fuller Statistic The Augmented Dickey-Fuller test (ADF) is a test for a unit root in a time series sample. It is an augmented version of the Dickey-Fuller test for a larger and more complicated set of time series models. The ADF statistic used in the test is a negative number. The more negative it is, the stronger the rejection of the hypothesis that there is a unit root at some level of confidence. ADF critical values: (1%) -3.4334, (5%) -2.8627, (10%) -2.5674. 3.2.1.2. Homoscedasticity assumption Homoscedasticity refers to the assumption that the dependent variable exhibits similar amounts of variance across the range of values of an independent variable. Figure 3.4: Plot of Homoscedasticity Unfortunately, chapter 2, based on previous empirical studies, confirmed that financial markets usually experience unexpected events and uncertainties in prices (and returns), and exhibit non-constant variance (heteroskedasticity). Indeed, the volatility of financial asset returns changes over time, with periods of exceptionally high volatility interspersed with periods of unusually low volatility, namely volatility clustering. It is one of the widely recognised stylised facts (stylised statistical properties of asset returns) common to a broad set of financial assets. Volatility clustering reflects that high-volatility events tend to cluster in time. 3.2.1.3.
Stationarity assumption According to Cont (2001), the most essential prerequisite of any statistical analysis of market data is the existence of some statistical properties of the data under study which remain constant over time; if not, it is meaningless to try to identify them. One of the hypotheses relating to the invariance of the statistical properties of the return process over time is stationarity. This hypothesis assumes that for any set of time instants t1, …, tk and any time interval τ, the joint distribution of the returns r(t1), …, r(tk) is the same as the joint distribution of the returns r(t1 + τ), …, r(tk + τ). The Augmented Dickey-Fuller test, in turn, will be used to examine the stationarity of the statistical properties of the returns. 3.2.1.4. Serial independence assumption There are a large number of tests of the randomness of sample data. Autocorrelation plots are one common method of testing for randomness. Autocorrelation is the correlation between the returns at different points in time. It is the same as calculating the correlation between two different time series, except that the same time series is used twice: once in its original form and once lagged by one or more time periods. The results can range from +1 to -1. An autocorrelation of +1 represents perfect positive correlation (i.e. an increase seen in one time series will lead to a proportionate increase in the other time series), while a value of -1 represents perfect negative correlation (i.e. an increase seen in one time series results in a proportionate decrease in the other time series). In terms of econometrics, the autocorrelation plot will be examined based on the Ljung-Box Q statistic test. However, instead of testing randomness at each distinct lag, it tests the overall randomness based on a number of lags. The Ljung-Box test statistic can be defined as: Q = n(n + 2) Σ_{j=1..h} ρ̂j² / (n − j), where n is the sample size, ρ̂j is the sample autocorrelation at lag j, and h is the number of lags being tested.
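The Ljung-Box statistic just defined can be sketched directly from that formula:

```python
def ljung_box_q(x, h):
    """Ljung-Box Q(h) = n(n+2) * sum_{j=1..h} rho_j**2 / (n - j);
    chi-square(h) distributed under the null of no serial correlation."""
    n = len(x)
    m = sum(x) / n
    c0 = sum((v - m) ** 2 for v in x)
    q = 0.0
    for j in range(1, h + 1):
        rho = sum((x[t] - m) * (x[t + j] - m) for t in range(n - j)) / c0
        q += rho * rho / (n - j)
    return n * (n + 2) * q

# A strongly alternating series has a lag-1 autocorrelation near -1,
# hence a very large Q, so the null of randomness would be rejected.
print(ljung_box_q([1.0, -1.0] * 100, 12))
```

Applied to the return and squared-return series with h = 12, this produces the Q(12) and Q2(12) entries of Table 3.1.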
The hypothesis of randomness is rejected if Q > χ²(1 − α, h), where χ²(1 − α, h) is the percent point function of the Chi-square distribution with h degrees of freedom and α is the significance level. 3.2.2. Data Characteristics Table 3.1 gives the descriptive statistics for the FTSE 100 and the SP 500 daily stock market prices and returns. Daily returns are computed as logarithmic price relatives: Rt = ln(Pt/Pt−1), where Pt is the closing daily price at time t. Figures 3.5a and 3.5b, 3.6a and 3.6b present the plots of returns and price indexes over time. Besides, Figures 3.7a and 3.7b, 3.8a and 3.8b illustrate the frequency distributions of the FTSE 100 and the SP 500 daily return data with a normal distribution curve imposed, spanning from 05/06/2002 through 22/06/2009.

Table 3.1: Diagnostics table of statistical characteristics of the returns of the FTSE 100 index and the SP 500 index between 05/06/2002 and 22/06/2009.

DIAGNOSTICS                       SP 500                    FTSE 100
Number of observations            1774                      1781
Largest return                    10.96%                    9.38%
Smallest return                   -9.47%                    -9.26%
Mean return                       -0.0001                   -0.0001
Variance                          0.0002                    0.0002
Standard Deviation                0.0144                    0.0141
Skewness                          -0.1267                   -0.0978
Excess Kurtosis                   9.2431                    7.0322
Jarque-Bera                       694.485***                2298.153***
Augmented Dickey-Fuller (ADF) 2   -37.6418                  -45.5849
Q(12)                             20.0983* (Autocorr: 0.04) 93.3161*** (Autocorr: 0.03)
Q2(12)                            1348.2*** (Autocorr: 0.28) 1536.6*** (Autocorr: 0.25)
Ratio of SD/mean                  144                       141

Note: 1. *, **, and *** denote significance at the 10%, 5%, and 1% levels, respectively. 2.
95% critical value for the augmented Dickey-Fuller statistic = -3.4158. Figure 3.5a: The FTSE 100 daily returns from 05/06/2002 to 22/06/2009. Figure 3.5b: The SP 500 daily returns from 05/06/2002 to 22/06/2009. Figure 3.6a: The FTSE 100 daily closing prices from 05/06/2002 to 22/06/2009. Figure 3.6b: The SP 500 daily closing prices from 05/06/2002 to 22/06/2009. Figure 3.7a: Histogram of the FTSE 100 daily returns with a normal distribution curve imposed, spanning from 05/06/2002 through 22/06/2009. Figure 3.7b: Histogram of the SP 500 daily returns with a normal distribution curve imposed, spanning from 05/06/2002 through 22/06/2009. Figure 3.8a: Diagram of the FTSE 100's frequency distribution with a normal distribution curve imposed, spanning from 05/06/2002 through 22/06/2009. Figure 3.8b: Diagram of the SP 500's frequency distribution with a normal distribution curve imposed, spanning from 05/06/2002 through 22/06/2009. Table 3.1 shows that the FTSE 100 and the SP 500 average daily returns are approximately 0 percent, or at least very small compared with the sample standard deviation (the standard deviation is 141 and 144 times the size of the average return for the FTSE 100 and the SP 500, respectively). This is why the mean is often set to zero when modelling daily portfolio returns, which reduces the uncertainty and imprecision of the estimates. In addition, the large standard deviation relative to the mean supports the evidence that daily changes are dominated by randomness, and the small mean can be disregarded in risk measure estimates. Moreover, the paper also employs five statistics often used in analysing data, namely the skewness, kurtosis, Jarque-Bera, Augmented Dickey-Fuller (ADF) and Ljung-Box tests, to examine the empirical full period from 05/06/2002 through 22/06/2009. Figures 3.7a and 3.7b show the histograms of the FTSE 100 and the SP 500 daily return data with the normal distribution imposed.
The distributions of both indexes have longer, fatter tails and higher probabilities for extreme events than the normal distribution, in particular on the negative side (negative skewness implying that the distribution has a long left tail). Fatter negative tails mean a higher probability of large losses than the normal distribution would suggest. Each distribution is also more peaked around its mean than the normal distribution: indeed, the value of the kurtosis is very high (around 10 and 12 for the FTSE 100 and the SP 500, respectively, compared with 3 for the normal distribution; see also Figures 3.8a and 3.8b for more details). In other words, the most prominent deviation from the normal distributional assumption is the kurtosis, which can be seen from the middle bars of the histogram rising above the normal distribution. Moreover, it is obvious that outliers still exist, which indicates that excess kurtosis is still present. The Jarque-Bera test rejects normality of returns at the 1% level of significance for both indexes. So, the samples exhibit the key financial characteristics: volatility clustering and leptokurtosis. Besides, the daily returns for both indexes (presented in Figures 3.5a and 3.5b) reveal that volatility occurs in bursts; in particular, the returns were very volatile at the beginning of the examined period, from June 2002 to the middle of June 2003. After remaining stable for about four years, the returns of these two well-known stock indexes became highly volatile from July 2007 (when the credit crunch was about to begin) and peaked dramatically from July 2008 to the end of June 2009. Generally, there are two recognised characteristics of the collected daily data. First, extreme outcomes occur more often, and are larger, than predicted by the normal distribution (fat tails). Second, the size of market movements is not constant over time (conditional volatility). In terms of stationarity, the Augmented Dickey-Fuller test is adopted for the unit root test.
The null hypothesis of this test is that there is a unit root (the time series is non-stationary). The alternative hypothesis is that the time series is stationary. If the null hypothesis is rejected, the series is a stationary time series. In this thesis, the paper employs the ADF unit root test including an intercept and a trend term on returns. The results from the ADF tests indicate that the test statistics for the FTSE 100 and the SP 500 are -45.5849 and -37.6418, respectively. Such values are significantly less than the 95% critical value for the augmented Dickey-Fuller statistic (-3.4158). Therefore, we can reject the unit-root null hypothesis and conclude that the daily return series are robustly stationary. Finally, Table 3.1 shows the Ljung-Box test statistics for serial correlation of the return and squared return series for k = 12 lags, denoted by Q(k) and Q2(k), respectively.
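The two second-moment facts reported here, near-zero autocorrelation in the returns but positive autocorrelation in the squared returns, can be reproduced on data simulated from a GARCH(1,1) process. The parameters below are hypothetical values close to the Table 3.2 estimates, used purely as an illustration:

```python
import random

def acf(x, lag):
    """Sample autocorrelation of x at the given lag."""
    n = len(x)
    m = sum(x) / n
    c0 = sum((v - m) ** 2 for v in x)
    return sum((x[t] - m) * (x[t + lag] - m) for t in range(n - lag)) / c0

# Simulate a GARCH(1,1)-style series with volatility clustering.
rng = random.Random(42)
omega, alpha, beta = 1.2e-6, 0.09, 0.89
var, rets = 2e-4, []
for _ in range(2000):
    r = rng.gauss(0.0, var ** 0.5)
    rets.append(r)
    var = omega + alpha * r * r + beta * var

sq = [r * r for r in rets]
print("lag-1 acf of returns:", round(acf(rets, 1), 3))
print("lag-1 acf of squared returns:", round(acf(sq, 1), 3))
```

The same contrast, weak linear dependence but pronounced dependence in the squares, is what the Q(12) and Q2(12) statistics of Table 3.1 formalise for the actual index data.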
The Ljung-Box Q2 test statistic for the squared returns is much higher, indicating the presence of serial correlation in the squared return series. Figures 3.10a and 3.10b) and the autocorrelation coefficient (presented in Table 3.1) also confirm the autocorrelations in squared returns (variances) for the FTSE 100 and the SP 500 data, and more importantly, variance displays positive correlation with its own past, especially with short lags. Corr(R2t+1,R2t+1-ÃŽ ») > 0, for ÃŽ » = 1,2,3†¦, 100 Figure 3.10a: Autocorrelations of the FTSE 100 squared daily returns Figure 3.10b: Autocorrelations of the SP 500 squared daily returns 3.3. Calculation of Value At Risk The section puts much emphasis on how to calculate VaR figures for both single return indexes from proposed models, including the Historical Simulation, the Riskmetrics, the Normal-GARCH(1,1) (or N-GARCH(1,1)) and the Student-t GARCH(1,1) (or t-GARCH(1,1)) model. Except the historical simulation model which does not make any assumptions about the shape of the distribution of the assets returns, the other ones commonly have been studied under the assumption that the returns are normally distributed. Based on the previous section relating to the examining data, this assumption is rejected because observed extreme outcomes of the both single index returns occur more often and are larger than predicted by the normal distribution. Also, the volatility tends to change through time and periods of high and low volatility tend to cluster together. Consequently, the four proposed VaR models under the normal distribution either have particular limitations or unrealistic. Specifically, the historical simulation significantly assumes that the historically simulated returns are independently and identically distributed through time. Unfortunately, this assumption is impractical due to the volatility clustering of the empirical data. 
Similarly, although the Riskmetrics tries to avoid relying on sample observations and make use of additional information contained in the assumed distribution function, its normally distributional assumption is also unrealistic from the results of examining the collected data. The normal-GARCH(1,1) model and the student-t GARCH(1,1) model, on the other hand, can capture the fat tails and volatility clustering which occur in the observed financial time series data, but their returns standard distributional assumption is also impossible comparing to the empirical data. Despite all these, the thesis still uses the four models under the standard distributional assumption of returns to comparing and evaluating their estimated results with the predicted results based on the student distributional assumption of returns. Besides, since the empirical data experiences fatter tails more than that of the normal distribution, the essay intentionally employs the Cornish-Fisher Expansion technique to correct the z-value from the normal distribution to account for fatter tails, and then compare these results with the two results above. Therefore, in this chapter, we purposely calculate VaR by separating these three procedures into three different sections and final results will be discussed in length in chapter 4. 3.3.1. Components of VaR measures Throughout the analysis, a holding period of one-trading day will be used. For the significance level, various values for the left tail probability level will be considered, ranging from the very conservative level of 1 percent to the mid of 2.5 percent and to the less cautious 5 percent. The various VaR models will be estimated using the historical data of the two single return index samples, stretches from 05/06/2002 through 31/07/2007 (consisting of 1305 and 1298 prices observations for the FTSE 100 and the SP 500, respectively) for making the parameter estimation, and from 01/08/2007 to 22/06/2009 for predicting VaRs and backtesting. 
One interesting point here is that, since there are few previous empirical studies examining the performance of VaR models during periods of financial crisis, the thesis deliberately backtests the validity of the VaR models within the current global financial crisis, from its beginning in August 2007.

3.3.2. Calculation of VaR

3.3.2.1. Non-parametric approach: Historical Simulation

As mentioned above, the historical simulation model pretends that the change in market factors from today to tomorrow will be the same as it was some time ago, and VaR is therefore computed from the historical returns distribution. Consequently, we treat this non-parametric approach in its own section. Chapter 2 showed that calculating VaR with the historical simulation model is not mathematically complex, since the measure only requires a reasonable period of historical data. Thus, the first task is to obtain an adequate historical time series for the simulation. Many previous studies report that the model's predictions are relatively reliable once the window of data used for simulating daily VaRs is not shorter than 1000 observed days. In this sense, the study is based on a sliding window of the previous 1305 and 1298 price observations (1304 and 1297 return observations) for the FTSE 100 and the SP 500, respectively, spanning from 05/06/2002 through 31/07/2007. We selected this window rather than a larger one because adding more historical data means adding older historical data that could be irrelevant to the future development of the return indexes. After sorting the past returns in ascending order into equally spaced classes, the predicted VaR is determined as the log-return lying at the target percentile; in this thesis these are the three widely used percentiles of 1%, 2.5% and 5% in the lower tail of the return distribution.
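The sort-and-pick-the-percentile procedure described above can be sketched in a few lines of Python. This is a minimal illustration, not the thesis's actual implementation (which works from the FTSE 100 and SP 500 spreadsheets); the returns here are synthetic and the function name is our own:

```python
# Minimal sketch of historical-simulation VaR: sort the returns in
# ascending order and read off the return at the target lower-tail
# percentile, reported as a positive loss fraction.
import math
import random

def historical_var(returns, tail_prob):
    """One-day historical-simulation VaR at the given lower-tail probability."""
    ordered = sorted(returns)  # ascending: worst losses first
    # Index of the tail_prob percentile (e.g. ~13th of 1304 for 1%)
    k = max(int(math.ceil(tail_prob * len(ordered))) - 1, 0)
    return -ordered[k]

# Toy example with 1000 pseudo-returns (synthetic, for illustration only)
random.seed(42)
sample = [random.gauss(0.0, 0.01) for _ in range(1000)]
var_99 = historical_var(sample, 0.01)   # 1% lower tail -> 99% VaR
var_95 = historical_var(sample, 0.05)   # 5% lower tail -> 95% VaR
```

Applied to a window of 1304 returns at the 1% tail, `k` lands near the 13th lowest return, mirroring the index arithmetic used for the FTSE 100 above.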
The result is a frequency distribution of returns, displayed as a histogram in Figures 3.11a and 3.11b below. The vertical axis shows the number of days on which returns fall into the various classes. The red vertical lines in the histogram separate the lowest 1%, 2.5% and 5% of returns from the remaining (99%, 97.5% and 95%) returns. For the FTSE 100, since the histogram is drawn from 1304 daily returns, the 99%, 97.5% and 95% daily VaRs are approximately the 13th, 33rd and 65th lowest returns in this dataset, which are -3.2%, -2.28% and -1.67%, respectively, and are roughly marked in the histogram by the red vertical lines. The interpretation is that the VaR gives a number such that there is, say, a 1% chance of losing more than 3.2% of the single asset's value tomorrow (on 1st August 2007). The SP 500 VaR figures, on the other hand, are slightly smaller than those of the UK stock index, at -2.74%, -2.03% and -1.53% for the 99%, 97.5% and 95% confidence levels, respectively.

Figure 3.11a: Histogram of daily returns of FTSE 100 between 05/06/2002 and 31/07/2007
Figure 3.11b: Histogram of daily returns of SP 500 between 05/06/2002 and 31/07/2007

After predicting the VaRs for the first day of the forecast period, we successively calculate VaRs for the whole forecast period, covering 01/08/2007 to 22/06/2009. Whether the proposed non-parametric model performs accurately in this turbulent period will be discussed at length in Chapter 4.

3.3.2.2. Parametric approaches under the normal distributional assumption of returns

This section presents how to calculate the daily VaRs using the parametric approaches, namely RiskMetrics, the normal-GARCH(1,1) and the Student-t GARCH(1,1), under the standard distributional assumption of returns. The results and the validity of each model during the turbulent period will be considered in depth in Chapter 4.

3.3.2.2.1.
The RiskMetrics

Compared with the historical simulation model, RiskMetrics, as discussed in Chapter 2, does not rely solely on sample observations; instead, it makes use of additional information contained in the normal distribution function. All that is needed is the current estimate of volatility. In this sense, we first calculate the daily RiskMetrics variance for both indexes across the parameter estimation period from 05/06/2002 to 31/07/2007, based on the well-known RiskMetrics variance formula (2.9). Specifically, we use the fixed decay factor λ = 0.94 (the RiskMetrics system suggests λ = 0.94 for forecasting one-day volatility). The other inputs, the squared log-return and the variance of the previous day, are easily calculated. After calculating the daily variance, we successively measure VaRs for the forecasting period from 01/08/2007 to 22/06/2009 at the confidence levels of 99%, 97.5% and 95%, based on the normal VaR formula (2.6), where the critical z-value of the normal distribution at each significance level is simply computed using the Excel function NORMSINV.

3.3.2.2.2. The Normal-GARCH(1,1) model

For GARCH models, Chapter 2 established that the most important task is to estimate the model parameters ω, α and β. These parameters have to be computed numerically, using the method of maximum likelihood estimation (MLE). In practice, in order to carry out the MLE, many previous studies use professional econometric software rather than handling the mathematical calculations by hand. In this light, the normal-GARCH(1,1) model is estimated with a well-known econometric tool, STATA, to obtain the model parameters (see Table 3.2 below).

Table 3.2.
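The RiskMetrics recursion of formula (2.9) is the exponentially weighted moving average of squared returns. A minimal Python sketch follows; the thesis performs this step in a spreadsheet, so the function names are our own, the return path is synthetic, and the hard-coded standard normal quantiles stand in for Excel's NORMSINV:

```python
# Sketch of the RiskMetrics (EWMA) variance recursion and normal VaR.
LAMBDA = 0.94  # decay factor suggested by RiskMetrics for one-day volatility
# Left-tail standard normal |z| values (NORMSINV equivalents)
Z = {0.01: 2.3263, 0.025: 1.9600, 0.05: 1.6449}

def ewma_variance(returns, init_var):
    """sigma2_t = lambda * sigma2_{t-1} + (1 - lambda) * r_{t-1}^2."""
    var = init_var
    for r in returns:
        var = LAMBDA * var + (1.0 - LAMBDA) * r * r
    return var

def normal_var(sigma2, tail_prob):
    """One-day VaR as a positive loss fraction: |z| * sigma (formula (2.6))."""
    return Z[tail_prob] * sigma2 ** 0.5

# Synthetic daily log-returns, for illustration only
rets = [0.012, -0.008, 0.005, -0.021, 0.003]
sigma2 = ewma_variance(rets, init_var=0.01 ** 2)
var_99 = normal_var(sigma2, 0.01)
var_95 = normal_var(sigma2, 0.05)
```

Note that if every past squared return equals the current variance, the recursion leaves the variance unchanged, which is why the EWMA reacts only to surprises in the squared returns.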
The parameter statistics of the Normal-GARCH(1,1) model for the FTSE 100 and the SP 500

Normal-GARCH(1,1)*
Parameters                FTSE 100     SP 500
α                         0.0955952    0.0555244
β                         0.8907231    0.9289999
ω                         0.0000012    0.0000011
α + β                     0.9863183    0.9845243
Number of Observations    1304         1297
Log likelihood            4401.63      4386.964

* Note: In this section, we report the results from the Normal-GARCH(1,1) model using the method of maximum likelihood, under the assumption that the errors conditionally follow the normal distribution, at a significance level of 5%.

According to Table 3.2, the coefficients of the lagged squared returns (α) for both indexes are positive, indicating that strong ARCH effects are apparent in both financial markets. Also, the coefficients of the lagged conditional variance (β) are significantly positive and less than one, indicating that the impact of 'old' news on volatility is significant. The magnitude of β is especially high (around 0.89-0.93), indicating a long memory in the variance. The estimate of ω was 1.2E-06 for the FTSE 100 and 1.1E-06 for the SP 500, implying a long-run standard deviation of daily market returns of about 0.94% and 0.84%, respectively. The log-likelihood of this model was 4401.63 for the FTSE 100 and 4386.964 for the SP 500. The log-likelihood ratios rejected the hypothesis of normality very strongly. After calculating the model parameters, we measure the conditional variance (volatility) for the parameter estimation period, covering 05/06/2002 to 31/07/2007, based on the conditional variance formula (2.11), where the squared log-return and the conditional variance of the previous day enter the recursion. We then measure predicted daily VaRs for the forecasting period from 01/08/2007 to 22/06/2009 at confidence levels of 99%, 97.5% and 95% using the normal VaR formula (2.6).
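Using the FTSE 100 estimates reported in Table 3.2, the conditional variance recursion (2.11) and the implied long-run volatility can be sketched as follows. This is an illustration only (the thesis runs the recursion on the actual return series; the toy return path and function name here are our own):

```python
# GARCH(1,1) conditional-variance recursion with the FTSE 100 estimates
# from Table 3.2: sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}
OMEGA, ALPHA, BETA = 0.0000012, 0.0955952, 0.8907231

def garch_variance(prev_return, prev_var):
    """One step of the GARCH(1,1) recursion (formula (2.11))."""
    return OMEGA + ALPHA * prev_return ** 2 + BETA * prev_var

# Long-run (unconditional) daily variance: omega / (1 - alpha - beta)
long_run_var = OMEGA / (1.0 - ALPHA - BETA)
long_run_vol = long_run_var ** 0.5  # roughly 0.94% per day, as in the text

# Iterate the recursion over a short synthetic return path
var = long_run_var
for r in [0.01, -0.02, 0.004]:
    var = garch_variance(r, var)
```

Since α + β ≈ 0.986 is close to one, shocks to the variance decay slowly, which is the "long memory" noted above.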
Again, the critical z-value of the normal distribution at significance levels of 1%, 2.5% and 5% is simply computed using the Excel function NORMSINV.

3.3.2.2.3. The Student-t GARCH(1,1) model

Unlike the Normal-GARCH(1,1) approach, this model assumes that the volatility (or the errors of the returns) follows the Student-t distribution. In fact, many previous studies have suggested that the symmetric GARCH(1,1) model with volatility following the Student-t distribution is more accurate than the one based on the normal distribution when examining financial time series. Accordingly, the thesis additionally employs the Student-t GARCH(1,1) approach to measure VaRs. In this section, we use this model under the normal distributional assumption of returns. The first step is to estimate the model parameters using the method of maximum likelihood estimation, as obtained from STATA (see Table 3.3).

Table 3.3. The parameter statistics of the Student-t GARCH(1,1) model for the FTSE 100 and the SP 500

Student-t GARCH(1,1)*
Parameters                FTSE 100     SP 500
α                         0.0926120    0.0569293
β                         0.8946485    0.9354794
ω                         0.0000011    0.0000006
α + β                     0.9872605    0.9924087
Number of Observations    1304         1297
Log likelihood            4406.50      4399.24

* Note: In this section, we report the results from the Student-t GARCH(1,1) model using the method of maximum likelihood, under the assumption that the errors conditionally follow the Student-t distribution, at a significance level of 5%.

Table 3.3 reveals the same characteristics in the Student-t GARCH(1,1) model parameters as in the normal-GARCH(1,1) approach. Specifically, the estimates of α show that evidently strong ARCH effects occurred in the UK and US financial markets during the parameter estimation period from 05/06/2002 to 31/07/2007. Moreover, as Floros (2008) noted, there was also a considerable impact of 'old' news on volatility as well as a long memory in the variance.
We then follow the same steps as in calculating VaRs with the normal-GARCH(1,1) model.

3.3.2.3. Parametric approaches under the normal distributional assumption of returns modified by the Cornish-Fisher Expansion technique

Section 3.3.2.2 measured the VaRs using the parametric approaches under the assumption that the returns are normally distributed. Regardless of their results and performance, this assumption is clearly unrealistic, since the collected empirical data exhibit fatter tails than the normal distribution. Consequently, in this section the study intentionally employs the Cornish-Fisher Expansion (CFE) technique to correct the z-value from the normal distribution so as to account for the fatter tails. Again, whether the proposed models performed well during the recent crisis period will be assessed at length in Chapter 4.

3.3.2.3.1. The CFE-modified RiskMetrics

Similar
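As a sketch of the correction, the standard fourth-order Cornish-Fisher adjustment combines the normal z-value with the sample skewness and excess kurtosis of the returns. The function and variable names below are our own, and the skewness/kurtosis inputs are illustrative, not the thesis's estimated values:

```python
# Fourth-order Cornish-Fisher adjustment of a normal quantile z,
# given skewness S and excess kurtosis K of the empirical returns.
def cornish_fisher_z(z, skew, excess_kurt):
    """Cornish-Fisher-modified quantile for a skewed, fat-tailed distribution."""
    return (z
            + (z ** 2 - 1.0) * skew / 6.0
            + (z ** 3 - 3.0 * z) * excess_kurt / 24.0
            - (2.0 * z ** 3 - 5.0 * z) * skew ** 2 / 36.0)

z99 = -2.3263  # 1% left-tail standard normal quantile
# With zero skewness and zero excess kurtosis, the correction vanishes
z_unchanged = cornish_fisher_z(z99, 0.0, 0.0)
# Fat tails (positive excess kurtosis) push the 1% quantile further out,
# giving a larger VaR than the plain normal assumption
z_cf = cornish_fisher_z(z99, -0.1, 1.5)
```

This is precisely why the CFE-modified models produce more conservative tail estimates than their plain-normal counterparts when the data are leptokurtic.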