
President Obama, addressing a vibrant crowd of Stanford students, once stated that "it is now clear that the cyber threat is one of the most serious economic and national security challenges we face as a nation." Obama continued his strong words by stating, "We know that cyber intruders have probed our electrical grid, and that in other countries cyber attacks have plunged entire cities into darkness" (Obama, 2015). In other words, when the president of the United States puts this much emphasis on a subject, it shows how important it is and how large an impact it could have on the nation. The Information Age has had a lasting impact not only on the United States but on the world as a whole. Over the last 40 years the United States has become deeply dependent on digital networks, because almost everything is controlled by computers. Terrorist organizations such as Al Qaeda and ISIS, along with hostile countries, are aware of this, and an attack on our networks could cripple our economy. It would bring the most powerful and influential nation to a standstill. This is why President Obama has expressed the need for increased security measures: a cyber war is a reality that could happen soon, and we must be prepared for it. The 2016 United States presidential election is a prime example of what happens when we allow cyberterrorism to go undetected, and of the rippling consequences we face from that mistake. But what is cyberterrorism? How can you address a problem that most Americans do not know is a problem? And, most importantly, how do you find solutions so that future generations do not bear the burden of our mistakes? This essay attempts to answer these questions and to draw a conclusion about ways of combating cyberterrorism.

What is cyberterrorism? Lately, this term has appeared constantly in the media and news outlets.
Cyberterrorism is defined as the use of digital equipment to bring down a country by tapping into its computer-based programs and dismantling its infrastructure, which includes but is not limited to banking networks, air traffic control systems, gas and oil production, transportation, and emergency services, all of which rely on computer networks to function (Conklin and Shoemaker, 2011). Yet many do not understand the concept of this word and its effect on our way of life. With the advancement of technology, cybercrime will become the leading threat to the safety and security of the American people. Experts on cybercrime agree that it is an issue that needs to be examined in greater depth, because the widespread use of computers in the global economy has made computers and the internet vital to everyday life (Siegel, 2015). Although there are three major types of cybercrime, namely cyber fraud, cyber vandalism, and cyberterrorism (Siegel, 2015), the most dangerous and most threatening of them is cyberterrorism. Since the World Trade Center attacks on September 11, the word "terrorism" has become a campaign slogan for nationalism.

First of all, what makes cyberterrorism intrinsically different from conventional terrorist attacks is that it crosses borders freely, and terrorists can bypass American law because they can operate in countries that have no laws against cyberterrorism (Siegel, 2015). The Federal Bureau of Investigation (FBI) states that "cyber terrorists can also use the internet as a supplement to regular attacks because they can obtain information about target countries." When people think of the different types of warfare, they probably think of the more common types seen in past wars, such as guerrilla warfare, chemical warfare, and biological warfare. But in the present day and age a new kind of warfare has emerged: cyber warfare.
This type of warfare occurs when an outside power chooses to attack a network or power grid with harmful intent. An example of this can be seen when hackers released a series of cyber attacks on Brazil's power grid and left cities and parts of the country powerless for up to a few days. As a result, Brazil lost millions upon millions of dollars and its economy was at a standstill. Now imagine if such a situation occurred within the United States: the devastation it would cause, economically and domestically, would be record-breaking. An incident that helped put the security of networks front and center in the United States occurred in 2007, when terabytes of information were stolen from the Department of Defense, the Department of State, the Department of Commerce, the Department of Energy, and NASA because of a breach by a foreign power (FBI). The FBI and other government agencies have made efforts to combat the numerous attempts at such breaches. This was followed by another intrusion into the CENTCOM network, which the Department of Defense uses to relay information for operations in the wars America is currently fighting. This attack was believed to be a backdoor attack through corrupted memory sticks and thumb drives. Just as technology has advanced, so have the capabilities of the perpetrators committing these crimes.

Conklin and Shoemaker agree that it is America's heavy dependence on the cyber world for its critical life-support functions that makes it severely vulnerable to an unprecedented electronic attack. Just like a traditional attack, cyberterrorism can lead to death or bodily injury, explosions, plane crashes, water contamination, or severe economic loss (Conklin and Shoemaker, 2011). In other words, we do not see how much technology is growing, and how much we as a society have come to depend on its efficiency.
With the increase in our usage of all this technology, we have even given cyber attackers an opportunity to expand their terror. Traditionally, terrorist acts target a specific location and are executed precisely in that spot. The limits of technology once made it much easier to detect some, if not most, domestic and international terror plots unfolding within our nation, and this limited the damage inflicted on those the perpetrator hoped to influence and on the general public.

A prime example of this is the terrorist group ISIS. ISIS has been able to use the advancement of technology to consistently cause havoc worldwide. Most of their threats are aimed at the American government, and with the reach of modern technology they are able to influence a massive audience and engage in more terror activities. This playing field has grown enormously, to what could be conceived as boundless proportions. "Individuals or groups can now use Cyberspace to threaten International governments, or terrorize the citizens of a country" (Conklin and Shoemaker, 2011). The creation of a boundless area of attack makes it that much harder to determine where an act will take place. Since it is easy to see that for cyberterrorism to occur, computers must be accessible to the groups or individuals committing the acts, why not restrict who can use computers? This has actually been considered, but it would be rather difficult to do in today's world. America depends on computers far more than before, and we are not alone in this dependency: more and more global business and personal activities are conducted via the internet.

The FBI is making valuable efforts to ensure that cyberterrorism is limited. Chairman Carper states, "to counter the threats we face, we are engaging in an unprecedented level of collaboration within the U.S. government, with the private sector, and with international law enforcement" (FBI). In other words, they have formed one coalition against one common enemy.
Before this threat was as serious, all three of these sectors would address the problems of cyberterrorism and cybercrime separately. Now all three have come together to help one another succeed in putting an end to cyberterrorism. While there are attempts to resolve the problems that cyberterrorism poses, some of those efforts have led either to a small decrease in the activity or to perpetrators adapting and finding another route to achieve their goal. Ardit Ferizi is a prime example of how cooperation can pay off against cyberterrorism.

In October 2015, for the first time, the US Justice Department charged a suspect with both terrorism and hacking. The US government charged a hacker in Malaysia with stealing data belonging to US service members and passing it to members of ISIS with the intent to support them in arranging attacks against Western targets. The case was United States of America v. Ardit Ferizi. Ardit, commonly referred to as "Th3Dir3ctorY," was a citizen of Kosovo. He was sentenced to 20 years in prison for providing material support to the Islamic State of Iraq and the Levant (ISIL) terrorist organization, and for accessing a protected computer without authorization and obtaining information in order to provide material support to ISIL. Ferizi, detained by Malaysian authorities on a provisional arrest warrant on behalf of the U.S., was charged by criminal complaint on Oct. 6, 2015. The criminal complaint was unsealed on Oct. 15, 2015. Ferizi then agreed to extradition and pleaded guilty on June 15.

According to court documents, Ferizi admitted that on or about June 13, 2015, he gained system administrator-level access to a server that hosted the website of a U.S. victim company. The website contained databases with personally identifiable information (PII) belonging to tens of thousands of the victim company's customers, including members of the military and other government personnel.
Ferizi subsequently culled the PII belonging to U.S. military members and other government personnel, which totaled approximately 1,300 individuals. That same day, June 13, Ferizi provided the PII belonging to the 1,300 U.S. military members and government personnel to Junaid Hussain, a now-deceased ISIL recruiter and attack facilitator. Ferizi and Hussain discussed publishing the PII of those 1,300 victims in a hit list (Knappman, 2016). Hussain posted a tweet that contained a document with the PII of the approximately 1,300 U.S. military and other government personnel that Ferizi had taken from the victim company and provided to Hussain. This is a perfect example of both the severity of neglecting cybersecurity and the importance of finding ways to solve the problem (Knappman, 2016).

This case showcased, for the first time, how real and dangerous the national security cyber threat that results from the combination of terrorism and hacking can be. Denning stated, "This was a wake-up call not only to those of us in law enforcement, but also to those in private industry" (Denning). This successful prosecution also sends a message to those around the world that if you provide material support to designated foreign terrorist organizations and assist them with their deadly attack planning, you will have nowhere to hide. The case demonstrated that the United States government will reach halfway around the world if necessary to hold accountable those who engage in this type of activity. It helped set a more serious standpoint toward cyberterrorism and how damaging it can be. It was the complete cooperation of multiple sectors that made this case possible, which proves that the most effective way of combating cyberterrorism is working together rather than separately.
With the collaboration of the private sector and law enforcement, the result was a groundbreaking verdict that benefited efforts to stop cyberterrorism and cybercrime.

Another landmark case that strengthened protection against cybercrime was Shreya Singhal v. Union of India. It re-examined the constitutional validity of "Section 66A of the Information Technology Act, 2000 and its various parameters from the perspective of the various principles enshrined in the Indian Constitution" (Knappman, 2016). The court declared that section unconstitutional, reiterating the principle that any provision of law, concerning the real as well as the virtual world, must comply with the Indian Constitution. With the rapid movement of technology, India and the United States have ensured that they will make a prominent effort toward cybersecurity. The Singhal decision helped develop policies that protect individuals worldwide against perpetrators using encryption through computer resources and mobile applications such as WhatsApp.

Cyberterrorism has countless negative effects, and putting one specific problem over another would not solve the problem. Whether an individual is a direct or indirect victim of a cyber terror attack, he or she can be left groping in the dark if no prior preparations are made; the impacts of a cyber attack are much the same as those of any other terror attack on an organization. Although fighting highly sophisticated and intelligent cyberterrorism can seem like a no-win situation, it can be minimized or even stopped with proper technology, experts, and the willingness to respond. There are things that can be done now, and with the help of cases like United States of America v. Ardit Ferizi, change is slowly coming.
With the coalition of the American government, the private sector, and law enforcement, cybersecurity is on the verge of being something we will be dealing with for the next few decades, but with a stronger and greater purpose than before.

References

Conklin, Arthur, and Shoemaker (2011). Cybersecurity: The Essential Body of Knowledge. Boston, MA.

Denning, Dorothy E. (2000). Cyberterrorism. Georgetown Press, MD: Georgetown University.

Knappman, Edward W. (2016). Great American Trials. A New England Publishing Associates Book. Detroit, MI.

Siegel, Stanley G. (2015). Enterprise Cybersecurity: How to Build a Successful Cyberdefense Program Against Advanced Threats. Houghton Mifflin Harcourt. Boston, MA.


On September 11, 2001, buildings crumbled and thousands died as planes crashed into the twin towers in New York City. The United States saw the epitome of evil for the first time and, following those atrocious acts, entered a new era, one in which it was imperative to modify everyone's daily lives for the wellbeing of the nation. The United States now exists in a time when the safety of the people is a primary factor in how people live. This new reality has provoked the debate of the security of the nation versus the liberties the nation holds sacred. In an effort to preserve vigilance and ensure the safety of the people from the wrath of terrorism, the nation has had to sacrifice freedom for safety even though doing so goes against a principle of the Constitution. H.L. Mencken says, "the average man does not want to be free. He simply wants to be safe." This quote is not qualified because man wants both to be free and to be safe; however, it is impossible for total freedom and total safety to coexist.

As the quote suggests, the average man does want to be safe, physically and socially. The recent raging wildfires in Southern California are a perfect example of how average people want basic physical safety. With so many evacuation orders in place, people flee their affected neighborhoods in panic. Most people, once they receive evacuation orders, pack up some of their most important things if they have time and flee their homes, hoping that firefighters can save them. Everyone leaves looking for safety, whether at a family member's or friend's place or at one of the evacuation centers that have been set up. As well as physical safety, people also want to be safe socially. In "Two Ways to Belong in America," by Bharati Mukherjee, one of the two sisters states, "'if America wants to play the manipulative game, I'll play it, too,' she snapped. 'I'll become a U.S. citizen for now then change back to India when I'm ready to go home'" (Mukherjee 282).
The sister seeks safety by trying to assimilate socially into American culture. In a new environment, it is not implausible that one would try to fit into one's surroundings in order to feel accepted. Without this social acceptance, one cannot get a job to support oneself and may be threatened by those who hold racial stereotypes. By choosing to give up the freedom to express who she truly is, the sister instead chooses the safety of being accepted into her new culture. People want safety, whether in the most basic physical sense or in larger crises.

Unlike what the quote states, however, the average man also wants to be free, especially intellectually. In "The American Scholar," Ralph Waldo Emerson argues that the scholars of America are stuck in a confining, conventional style of writing and ideas, stating, "free should the scholar be,–free and brave. Free even to the definition of freedom, 'without any hindrance that does not arise out of his own constitution'" (Emerson 8). He promotes non-conformity, self-reliance, and anti-institutionalism so that original ideas can be encouraged. Emerson lays out a list of duties that a scholar should embrace in order to attain a distinctly American intellect. Among the duties he mentions are self-sacrifice and remaining true to oneself, even in the face of public persecution. Emerson argues that scholars should have the freedom to write in any style they wish because they now have that freedom, unlike in Europe, where they had virtually none, which caused their ultimate move to America. Emerson supports the idea of the average man wanting freedom.

Although man seeks to have both total freedom and total safety, these two ideals cannot exist together; one has to be sacrificed in order to have the other. This conflict is exhibited in The Crucible by Arthur Miller.
In the play, the citizens of the town are caught up in this conflict of choosing between freedom and safety. Considering that The Crucible is an allegory for the Red Scare, the citizens exemplify hysteria and the necessity of choosing safety or freedom, because they cannot have both at the same time. In that period, in order to stay safe and not be accused of being a witch, a person had to sacrifice freedom, because they could not express their true feelings without fear of being accused. Unlike the preponderance of citizens who choose safety over freedom, John Proctor, the play's tragic hero, chooses freedom by being upright and honest about his ideas, knowing that he could be killed because his individualistic ideas do not match the frenzied ideas of the citizens. Because the citizens sacrificed their freedom for their safety, and John Proctor sacrificed his safety for freedom, the play substantiates the fact that someone cannot have total freedom and total safety at the same time. For example, Danforth, the presiding judge of the trials, says, "you must understand, sir, that a person is either with this court or he must be counted against it, there be no road between. This is a sharp time, now, a precise time—we live no longer in the dusky afternoon when evil mixed itself with good and befuddled the world" (Miller 87), showing that someone can either choose safety by following the norms of society or choose freedom by going against them, and that there is no in-between: it is one or the other. Freedom must be sacrificed for safety, and safety must be sacrificed for freedom, displaying that one cannot exist with the other.

The United States saw the epitome of evil for the first time and, following the atrocious acts of 9/11, entered a new era, one in which it was imperative to modify everyone's daily lives for the sake of the safety of the nation.
This issue has triggered the often-controversial debate of the security of the nation versus the liberties the nation holds in its Constitution. In an effort to maintain vigilance and ensure the safety of the people from the wrath of terrorism, the nation has had to sacrifice freedom for safety even though doing so goes against the Constitution. H.L. Mencken says, "the average man does not want to be free. He simply wants to be safe." This quote is not qualified because man wants both to be free and to be safe, but one cannot be totally free or totally safe, because one cannot exist with the other.

Works Cited

Emerson, Ralph Waldo. "The American Scholar." Emerson on American Scholar.

Miller, Arthur. The Crucible: A Play in Four Acts. Penguin Books, 2003.

Mukherjee, Bharati. "Two Ways to Belong in America." Two Ways to Belong in America.


By looking at our laissez-faire economy, psychedelic drugs, what the media and moving pictures portray, and our government, we can apply Akers' social learning theory and Marxist theory to our question: why do gangs form in our communities? Akers' social learning theory explores how drugs and the media influence people. It states that people learn deviant behavior by watching, imitating, and learning from the social elements of their daily lives. Akers also discusses the major concepts of differential association, imitation, definitions, and differential reinforcement. The theory holds that a person is likely to produce behavior that violates social and legal norms, and to commit to that behavior, when they spend time with people who expose them to deviant patterns and when the deviant pattern is reinforced over and over again.

Marxist theory identifies capitalism as the cause of crime. It states that ownership of the means of production by the capitalist ruling class produces a society that is largely and fundamentally criminogenic. The theory holds that the crimes committed are either "crimes of accommodation or crimes of resistance to the capitalist system."
When scratching the surface of what causes gangs to form, peer pressure is one of the reasons. When recruiting young people to become gang members, it often starts with peer-pressuring them into joining by making it all sound attractive and exciting. Money plays a big role in that attraction when recruiting new gang members; for example, a 6-to-10-year-old child who is not officially part of the gang can make $200 to $400 working a small part-time job for the gang. Although these are important factors, they are not powerful enough on their own to make kids of that age or older do things that go against their morals. One of the ways children's morals can be misled into accepting gang violence is through the influence of television and movies. This is an example of the social learning theory at work. An average child spends more time looking at a television than on homework or in class. Since no one can completely control what goes on in a child's mind, they must be learning something while watching their favorite shows, movies, or cartoons. Very few television shows today are educational, so many negative ideas are being soaked up during this time. Shows on television today are very violent and are often presented from a gang's or a gang member's perspective. A mature adult can tell how bad the situation the gangs are living in is, but a child will take the life of a violent gang as acceptable.
"The ends justify the means" is taught through shows in which the "good guy" defeats the "bad guy" through violence and goes unpunished. As a child, the kid thinks this is perfectly acceptable because he knows that the "bad guy" was wrong but has no concept of what suitable punishments there are. Blood and gore on television also play a large part in influencing young minds. Children who watch gory scenes are fascinated by something they have never been exposed to before. Older viewers who watch gory scenes are not concerned about the blood but rather about how much pain the victim feels; a young mind does not make this connection. So a fascination with gore emerges, and I have seen this in several of my friends. Unfortunately, children who grow up watching this material are more likely to grow up with a stronger acceptance of becoming a violent gang member, or a "violence-acceptant" person. "Gangs carry the delinquent norms of society into intimate contact with the individual" (Marshall B. Clinard, 1963). The end result is that television leads a child to think and believe that violence is acceptable and never wrong, and this can easily carry over into a gang situation. This matters most when parents do not spend time with their children explaining what is right and wrong in what is being displayed on television, and today it is not surprising that newer books and forms of music encourage this kind of thinking.
Once a child or teenager mentally internalizes these ideas, they can easily become more accepting of the idea of joining a gang if they run into any family or outside troubles. For example, children who do not have their parents around as much will often feel starved of affection. Parents feel that food on the table is enough to show their love for their child, and they are often busy working for that reason. Children in those kinds of families are more likely to join a gang, first out of boredom, and then to feel the sense of belonging that did not exist within their own family. As time passes, a kind of family relationship or love may develop between the members of the gang and the child. It is then that the bond controls how the child thinks about the gang and how the gang has taken the place of their family. The new unfriendly structure of cities also affects the ease with which a boy or girl can join a gang. "The formation of gangs in cities, and most recently in suburbs, is facilitated by the same lack of community among parents. The parents do not know what their children are doing for two reasons: First, much of the parents' lives is outside the local community, while the children's lives are lived almost entirely within it. Second, in a fully developed community, the network of relations gives every parent, in a sense, a community of sentries who can keep him informed of his child's activities. In modern living-places (city or suburban), where such a network is attenuated, he no longer has such sentries." (Merton and Nisbet, 1971).
In male gangs, problems arise when one male tries to one-up another to see which one is the manliest. This leads to gang members participating in a "one-upmanship" in which each member tries to commit a bigger and more violent crime, or simply more crimes, than the others. Having members participate in this kind of thinking results in a never-ending, unorganized violence spree (a Clockwork Orange mentality, or in other words a chain reaction in which A causes B, which causes C, and so on). In gangs with smarter members, these feelings become a drive to be the star of a crime or to become the alpha of the group. This makes the gang much sharper and more organized when committing a crime. This kind of gang is found more among middle- or upper-class people, but it can appear in gangs in low-rent districts too. This "one-upmanship" is the main reason rival gangs fight. Many gangs feel powerful and want to be feared by the surrounding people. To do this they try to take down as many rival gangs as they can, so that they are the only ones left in their neighborhoods. From the tension between gangs, fights begin to form and gang murders happen. When two gangs are at war, life becomes very dangerous for the citizens who step between them in any way in the area. Fewer than 40% of drive-by shootings kill their intended victim, yet over 60% kill someone. Lastly, one of the biggest factors in joining a gang is safety. Although from an outside point of view we can clearly see that joining a gang can bring more harm than good, that is not always true for children who live in crowded, run-down neighborhoods such as the Bronx or, in the very worst case, Compton, where children may be beaten and robbed if they do not join a gang.
The gang also provides money for these children in order to win their trust and loyalty, since the money feeds the family. The reason children think that the gang they join will or can keep them safe is the propaganda the gangs present. This kind of propaganda appears when a gang member says that if anyone hurts the child, the gang will take revenge on whoever wronged them. People in low-rent areas are most often neglected because of poverty and race. This produces an attitude that motivates a person to base their life on doing whatever the system that mistreats them does not want. Although this accomplishes little, it is the main reason why gangs form and why people join them. They then commit crimes of resistance, which supports the Marxist explanation of crime. To wrap it all up, gangs are products of the environment we have created as a society. Some of the contributing factors include poor treatment, newspapers, websites, television, greed, violence, and other gangs. There seems to be no end to the formation of gangs without totally rebuilding the modern economy and value system we have built as human beings. Because the chance of this change is nearly nothing, we must learn how to cope effectively with gangs and try to keep them to a very low rate. Sadly, there is no real organized force to help fight gangs.
Of course, the police are supposed to deal with these kinds of situations, but police forces frequently show their growing inability to cope with these issues. What we need are more people to form organizations like the "Guardian Angels," a gang-like organization that makes life very hard for street gangs that break the law.




It gives me immense pleasure to write this letter of
recommendation for the truly outstanding Vaibhav Anand. He has been a student at
this school since its inception. In my decade of counseling students, it is very
rare to encounter an individual with as much self-awareness and compassion as
Vaibhav. Among his distinctive qualities, his resilience, spirit, and optimism
are chiefly striking given the unusual circumstances he has overcome in his
life. Bearing in mind the profundity of his experiences and his sophisticated
perception of life, Vaibhav possesses an extraordinary maturity beyond his
eighteen years.

Vaibhav has endured significant tragedies in his life. At the
very young age of eleven he lost his mother to meningitis, yet he still
performed exceptionally well in his subsequent studies and exhibited a profound
interest in computers and technology: he would build virtual machines, write
programs in C++, devise working scientific models, and take pleasure in solving
mathematics problems.

As a Science stream student, he is meticulous when
performing experiments and works patiently to complete them successfully.
Vaibhav is a self-motivated, dedicated student of high intelligence who can
grasp difficult concepts, think critically, and handle the rigor of a
competitive environment. He exhibits the qualities of a leader. He showed us
that achieving good grades was not a hard task for him, and he has been awarded
Scholar badges for his academic excellence consistently since the sixth grade.
In addition, he has been an active competitor in various Science and
Mathematics Olympiads since the sixth grade, and he has repeatedly represented
our school in scientific exhibitions with his innovative research experiments.
He was the top student in Mathematics in grade 11.

However, during his high school final exams (twelfth
grade), Vaibhav's father died by suicide, which affected Vaibhav's final
twelfth-grade results significantly. I strongly believe that the grades he
earned in those final exams do not evince his true potential; his twelfth-grade
mid-term grades reflect it far better. Throughout these unstable and uncertain
times, Vaibhav diligently devoted himself to his education while managing his
distressed home situation, which ultimately required him to take a gap year
(after graduating from high school in May 2017) to look after his younger
brother. He has shouldered responsibility for his brother while simultaneously
managing other essential household duties.

Keeping all of this in mind, I believe Vaibhav is an
industrious young man with great fortitude to withstand such challenging life
situations. Based on the incredible resilience and infectious enthusiasm
Vaibhav has shown throughout the setbacks in his life, I have no doubt that he
will not only continue to overcome any obstacle in his path but will also
thrive with great strength, grace, and an optimistic outlook.




The environment and economic activity have commonly been thought of as notions
involving a complicated trade-off. Their interaction has been the subject of
heated debate for decades, so it is no surprise that this is reflected in
various theoretical and empirical studies. The debate escalated after the
appearance of the so-called "Porter Hypothesis" (PH) of Porter (1991) and
Porter and van der Linde (1995), which suggests that stringent environmental
regulation induces innovation, which in turn increases overall competitiveness
at the industry level through superior productivity. The authors argue that
carefully constructed environmental policies can bring benefits through process
offsets (substituting input resources, reducing production disruptions, and
using less costly materials or utilizing them better) and product offsets
(improving product performance or average quality, eliminating expensive
materials, and shrinking disposal costs). The central message is that strict
environmental policies can enhance productivity by triggering innovation.

Figure 1. Schematic
representation of the Porter Hypothesis

Source: Ambec et al. (2013) after Porter (1991)


Because it directly confronted the traditional view that environmental
regulations negatively affect productivity (Jaffe et al. (1995), Gray (1987),
Barbera and McConnell (1990)), the hypothesis has also been criticized for
implying that firms fail to pursue profit-maximizing behaviour, i.e. to act
rationally (Palmer et al. (1995)). The main counter-argument is summarized in
the economist's well-known saying: "There are no free lunches"
(Sinclair-Desgagné 1991, 2). Innovation itself is financially demanding, and
once the opportunity costs of stronger environmental regulation are included,
production costs rise and may outweigh the revenue gains (e.g.
Sinclair-Desgagné (1991), Ambec et al. (2013)). The US industrial slowdown of
the 1970s contributed considerably to these conclusions. For instance, Jaffe
et al. (1995) suggested that large abatement costs were responsible for the
productivity downturn of US firms and the decline in their competitiveness, and
also for pushing companies to relocate their manufacturing processes to other
countries. On the other hand, as stated by Koźluk and Zipperer (2014) and
Porter and van der Linde (1995), these older studies have considerable
identification issues, as they concentrate mainly on domestic effects,
undervalue environmentally friendly innovations, and neglect industry- and
country-specific effects.

This literature review concentrates mainly on more recent empirical
works, grouped according to the regions or industries covered, as the results
are highly contested. For instance, the assumption that environmental
stringency might increase productivity has received much criticism from
neoclassical economists, since it is difficult to assimilate all the relevant
factors of its arguments into theoretical models (Broberg et al., 2013).
According to Smith and Walsh (2000), there is a problem with the methods used
to identify the forces driving productivity adjustment, so in practice it is
complicated to reject the Porter Hypothesis. Suggesting that there are "no
painless environmental policies", Smith and Walsh (2000, 74) claim that
productivity is instead harmed by costly environmental regulations.

Prevailing empirical studies tend to concentrate on one of the three variants
of the Porter Hypothesis defined by Jaffe and Palmer (1997): the "narrow",
"weak", or "strong" version. The "narrow" version suggests that innovation and
productivity benefit only under specific types of environmental policies. The
"weak" version says that regulation will trigger a certain type of innovation,
as firms are likely to invest in activities different from those they would
have pursued without the new policy constraints. The "strong" version states
that with the "kick" of tighter regulation, companies broaden and optimize
their decisions and processes, which leads to an increase in productivity. The
authors' own empirical results align with the "weak" form, while also showing
that coefficients differ greatly across industries.

Surveying most studies in the field, Koźluk and Zipperer (2014) found that the
evidence varies widely. With traditional measures at the plant level, comparing
regulated and non-regulated plants, the estimated effect is negative but not
very robust, and it depends significantly on particular plant characteristics.
At the industry level, early studies commonly view stringent environmental
policies as a burden on productivity, while more recent ones find no linkage or
a positive one.

Since cross-country analysis suffers from the lack of a reliable and
comparable estimation method, Botta and Koźluk (2014) constructed a proxy, the
environmental policy stringency index (EPS), which aggregates and scores
instruments related to climate and air pollution and allows a quantitative and
qualitative measure of the tightness of environmental regulation over a long
period. They define stringency as the "cost" placed on an activity that damages
the environment: it raises the opportunity cost of polluting and thereby
provides an incentive for environment-friendly activity. The index ranges from
0 (not stringent) to 6 (the highest degree of stringency) and covers most OECD
countries from 1990 to 2012. The indicators are divided into market-based
instruments (e.g. taxes on pollutants or other environmentally harmful
activities) and non-market-based instruments (e.g. subsidies for
environmentally favourable activities). According to the results, the
indicators show a positive and significant correlation with GDP as well as with
the Environmental Performance Index (EPI), and a negative association with
emissions per unit of GDP in nominal and PPP values. Besides, market-based
instruments are significantly correlated with the Green Patent Index, which
suggests that this type of instrument positively affects "green innovation".
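To make the aggregation idea concrete, the sketch below shows one hypothetical way instrument scores could be combined into a 0-6 composite. The function name, the instruments, and the equal weights on the two instrument families are all assumptions for illustration; the actual OECD weighting scheme is documented by Botta and Koźluk and differs from this.

```python
# Hypothetical sketch of aggregating instrument scores into a 0-6
# composite, in the spirit of the EPS index. Instrument names and
# equal family weights are illustrative only, not the OECD scheme.

def eps_index(market_based, non_market_based):
    """Average two families of instrument scores, each scored 0-6."""
    for score in list(market_based.values()) + list(non_market_based.values()):
        if not 0 <= score <= 6:
            raise ValueError("instrument scores must lie in [0, 6]")
    mb = sum(market_based.values()) / len(market_based)
    nmb = sum(non_market_based.values()) / len(non_market_based)
    # Equal weight on the two families (an assumption made here).
    return 0.5 * mb + 0.5 * nmb

score = eps_index(
    market_based={"co2_tax": 3, "nox_tax": 2},    # taxes on pollutants
    non_market_based={"renewable_subsidy": 4},    # subsidies, standards
)
print(score)  # prints 3.25
```

The point of the sketch is only that a composite index trades off sub-scores within a bounded scale, which is what allows stringency to be compared across countries and years.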

With the help of the same EPS index, Albrizio et al. (2017) empirically tested
the "strong" version of the Porter Hypothesis, i.e. whether stringent
environmental policy induces productivity, using a panel of OECD countries and
multi-factor productivity (MFP) growth. For the estimation, the authors
combined industry- and firm-level impacts with policy instruments classified by
their price mechanisms into market-based and non-market-based ones. The panel
includes 11 OECD countries and 22 manufacturing sectors over 2000-2009. To
track the effect, a three-year moving average of the EPS, lagged in time, was
used at both the industry and firm levels; this moving average also enters as
an interaction term with the distance to the global frontier. At the industry
level, the findings report that a tightening of environmental policy has a
positive short-term effect on productivity growth in countries whose industries
are technologically advanced. This impact fades with increasing distance to the
global frontier and becomes insignificant far from it. The overall estimated
marginal effect from the industry analysis depends on the technological stage
of the country-industry pair relative to the global frontier. The firm-level
results only partially confirm these positive findings: only one-fifth of firms
benefit, while the least productive firms in the sample experience a negative
effect. The authors suggest that this difference between the firm and industry
levels may be due to the sample composition in terms of countries or years
included, or to firm entry-exit dynamics, in which the least efficient firms
exit the market, raising the overall productivity of the industry. However, the
empirical model explains only 16.5% of overall variation, although most results
are significant at the 1% level. Lanoie et al. (2011) argue that their study is
the first to empirically detect the impact of all three channels of the Porter
Hypothesis. For its "strong" variant, proxies for business performance and
environmental policy instruments (command-and-control regulation and
environment-related taxes) were used. The database consists of 4,200 facilities
in 7 OECD countries: the USA, Canada, Japan, Germany, France, Hungary, and
Norway. The output shows a negative direct impact of stringency on productivity
(-0.078); however, the indirect effect (through environmental R&D) is positive.
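The lag structure described above, a trailing three-year moving average of the policy variable interacted with the distance to the frontier, can be illustrated with a small sketch. The function, series values, and distance figures below are hypothetical; this is not the authors' actual estimation code, only a sketch of how such a lagged regressor might be built.

```python
# Illustrative sketch (not the authors' code): lagging a policy series
# with a trailing three-year moving average, then interacting it with
# the distance to the global productivity frontier.

def moving_average_lag(series, window=3):
    """Trailing average over the previous `window` observations,
    excluding the current year, i.e. a lagged regressor.
    Returns None where no past observations exist."""
    out = []
    for t in range(len(series)):
        past = series[max(0, t - window):t]
        out.append(sum(past) / len(past) if past else None)
    return out

eps = [1.0, 1.2, 1.5, 2.0, 2.4]            # annual EPS scores (made up)
distance = [0.30, 0.28, 0.25, 0.20, 0.15]  # hypothetical gap to frontier

lagged = moving_average_lag(eps)
# Interaction term: lagged policy stringency times distance to frontier.
interaction = [None if l is None else l * d
               for l, d in zip(lagged, distance)]
print(lagged)
print(interaction)
```

Regressing productivity growth on such a lagged term, rather than on contemporaneous stringency, is what lets the specification pick up the dynamic effect the hypothesis predicts.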

Jaraite and Di Maria (2012) studied the effect of environmental policy, in the
form of the European Union's Emissions Trading Scheme, on the efficiency and
productivity of power generation across EU states from 1996 to 2007. The
analysis treats emissions as an undesirable output, covering emissions
generated by public electricity plants, combined heat and power generation, and
public heat plants. Inputs such as labor, fuel, and net installed electrical
capacity are included in the estimation. The key result shows that the price of
emissions positively affects efficiency; however, no significant effect on
productivity is observable. Similar results were obtained by Rubashkina et al.
(2015), who analysed policy stringency and productivity growth in 17 European
countries from 1997 to 2009, focusing on manufacturing sectors and using an
instrumental variables approach. Pollution abatement and control expenditures
(PACE) served as a proxy for the stringency of environmental regulation, and
total factor productivity (TFP) for sectoral economic performance. Productivity
equations were estimated both in levels and in growth rates. The results showed
no significant effect of policy stringency on factor productivity across
different specifications and regardless of the controls used. Interestingly,
the model also indicates that higher R&D investments do not contribute to the
productivity of a given country-sector; additional patents can even reduce its
productivity. However, Franco and Marin (2017) examined the effect of
environmental tax stringency on innovation and productivity for 13
manufacturing sectors in a panel of 8 European countries between 2001 and 2007,
not only within sectors but also in upstream and downstream sectors. The main
results state that downstream policy stringency is the most relevant to
productivity and innovation growth, whereas within-sector regulation has a
positive impact only on productivity, and upstream regulation is negatively
correlated with productivity. A possible reason is that higher taxes imposed on
downstream sectors force their connected upstream sectors to innovate and
generate new technologies, which boost the performance of the downstream firms.

Another study (2009) specified regulated and unregulated production boundaries
to determine the relation between pollution abatement expenditures and
productivity changes across manufacturing sectors in Germany, Japan, the
Netherlands, and the United States from 1987 through 2001, using an "assigned
input" model. The evidence shows that pollution abatement expenses do not have
a significant negative influence on productivity growth. The study by Rexhäuser
and Rammer (2014) attempts to track the effect for Germany: more precisely, how
profitability is affected when innovations are induced by voluntarily applied
environmental regulation. The research is based on firm-level information on
environmental innovation in Germany for different pollutants, and on whether
each innovation was induced by governmental environmental regulation or not.
The findings state that innovations that do not enhance resource efficiency do
not positively influence productivity, and vice versa. This effect applies to
both types of innovation, regulation-induced and voluntarily implemented, with
a larger effect for the regulation-induced ones. However, the paper notes that
the results capture only resource efficiency, not total efficiency
(productivity), which according to the authors is an argument against the
Porter Hypothesis. Lundgren and Marklund (2015), analysing how firms'
environmental performance affects their economic performance (measured as
profit efficiency) in Swedish manufacturing industry over 1990-2001, found that
if environmental performance results from environmental policy, then "it is
not a determinant for the profit efficiency". Conversely, when it is
voluntarily implemented, the effect is positive and significant, so the Porter
Hypothesis is not supported. Meanwhile, Manello (2017) examines this aspect at
the international level, using firm-level data on Italian and German firms
operating in the chemical sector during 2004-2007. A directional distance
function (DDF) framework is used to neutralize potential differences between
the two economies in order to test for "win-win" opportunities, i.e. whether a
company subject to more stringent environmental policy can, through investment
in innovation, lift its productivity while simultaneously cutting emissions.
The results show that the average distance to the frontier decreased over the
years, demonstrating that plants implementing the best available technologies
outperform those adopting technologies with less strict environmental
requirements. Generally, the distance between the industries of the two
countries began to shrink after the initial shock of the European Pollutant
Release and Transfer Register (E-PRTR) established in 2001. The overall result,
estimated with Sequential Malmquist-Luenberger (SML) indexes, supports
"win-win" opportunities for both Italian and German firms and also demonstrates
a significant correlation between policy stringency and TFP (total factor
productivity) growth indexes. Chatzistamoulou et al. (2017) estimate the
changes in productivity in Greek manufacturing industries between 1993 and 2006
after the implementation of the Kyoto Protocol, which obliged operators to
balance operating expenditures with pollution abatement initiatives. The study
uses an industry-level balanced panel of 4,700 plants' abatement expenditures
(the pollution abatement index, PAI, following Aiken (2009)) as a proxy for
policy stringency. The empirical outcome shows only an insignificant effect on
productivity growth, with considerable variation across industries.

On the other hand, Managi et al. (2005) question the relationship between
environmental regulation, technological innovation, and productivity growth in
the offshore oil and gas industry, using a unique micro-level data set from the
Gulf of Mexico. The analysis applies standard statistical causality tests to
detect the relationship between different productivity indexes and regulation,
with DEA (Data Envelopment Analysis) used to measure changes in productivity
from 1968 to 1998. The authors argue that this model was chosen because it
helps decompose productivity and makes it possible to measure the dynamics of
its different components over time. The output shows that despite the increase
in the stringency of regulation, productivity in the market grew considerably.
A survey of productivity growth and environmental regulation in the Mexican and
US food manufacturing industries by Alpay et al. (2002) employs a dual profit
model, assuming that profit changes can be driven by technological development,
price adjustment, or the attainment of equilibrium. Data series on profits,
prices, capital stocks, and environmental regulatory activity in Mexico and the
United States were used to estimate the model from 1971 to 1994 for Mexico and
from 1962 to 1994 for the US. The results report that regulatory inspections (a
proxy for environmental policy stringency) in Mexico raised primal productivity
growth by 2.8% on average, whereas no clear impact of pollution abatement
regulation on manufacturing productivity was found in the US. Berman and Bui
(2001) investigated the effect of air quality regulation on the productivity of
oil refineries. Interestingly, they used a direct measure from a local
pollution regulator for the most heavily regulated oil refineries in the US, in
the South Coast Air Basin around Los Angeles, and compared them to other US
regions in a second step. Total productivity was obtained by summing data on
physical quantities of detailed products and materials from the Census of
Manufacturers. A fixed-effects model was used to allow for heterogeneity across
plants and for regulation differences that influence abatement. Inputs limited
by the policy are treated as quasi-fixed: pollution abatement capital and
abatement operating costs (which include the costs of labor, materials, and
services); labor, materials, and capital are the variable inputs. The final
estimates cover the period of sharp tightening in regulation between 1979 and
1992 and demonstrate that productivity in the Air Basin increased considerably.
Even during the period of the most stringent regulation, from 1987 to 1992,
refineries in the South Coast still experienced growth, compared to falling
production in other US regions covered by less strict policies. At the same
time, Greenstone et al. (2012) researched how air quality regulations influence
the total factor productivity (TFP) levels of US manufacturing plants. Using
detailed production data on around 1.2 million observations from the Annual
Survey of Manufactures from 1973 to 1993, they found that policy stringency
leads to a decline of around 2.6 percent in TFP among surviving plants. Lanoie
et al. (2008), in their empirical analysis using a GLS model to track the
impact of environmental regulation stringency on the total factor productivity
(TFP) of the Quebec manufacturing sector, stated and tested three assumptions:
(1) a dynamic version of the Porter Hypothesis, through lagged variables; (2)
that the effect is more observable in more polluting industries; and (3) that
the impact is greater in internationally competitive sectors. When the whole
sample is used, the results support the hypothesis that more stringent
environmental policy leads to a positive outcome for productivity, but only in
the dynamic case. The effect is stronger for industries more exposed to
international competition; for more polluting industries, the opposite was
revealed.

An empirical study by Hamamoto (2006) examines whether the stringency of
environmental policy in five Japanese manufacturing industries affected R&D
activity and was thereby responsible for productivity growth in the 1960s and
1970s. The empirical results imply that the relationship between pollution
control expenditures and investment in innovation over that period is
significant and positive. The findings further demonstrate that pollution
control expenditures decrease the average age of the capital stock and have a
positive impact on modernization at the 10% significance level. In addition,
the increase in R&D investment induced by tighter policy regulation is found to
contribute to total factor productivity growth over the stated period. Similar
results were obtained by Yang et al. (2012), who examine Taiwanese
manufacturing industries between 1997 and 2003 to determine whether
environmental policies (with pollution abatement costs as a proxy) induce R&D
and productivity. The findings show that stricter environmental protection is
positively correlated with R&D, which in turn has a conclusive influence on
productivity growth.

Turning to individual industries, Rassier and Earnhart (2010) test the "strong"
version of the Porter Hypothesis by analysing the impact of wastewater policy,
measured by wastewater discharge limits, on the profits of firms (proxied by
return on sales) operating in the chemical manufacturing industry. The authors
used panel data analysis with a quarterly sample of 926 observations covering
59 chemical manufacturing firms, and an annual sample of 337 observations
covering 73 firms over ten years (1993-2003). The model includes several
controls: sales growth, capital intensity, age of assets, size, market share,
and industry concentration. The empirical results demonstrate that stringent
clean water regulation depresses firm profitability, showing with 90%
confidence that a 10% tightening of discharge limits lowers return on sales by
1.7%. Nearly the same results were obtained by Gray and Shadbegian (2003), who
find that higher pollution abatement costs have a negative influence on
productivity, analysing 116 pulp and paper mills between 1979 and 1990, with a
substantial difference between integrated and non-integrated mills.

However, Sadeghzadeh (2014) developed a model examining not only productivity
but also competitiveness. He suggests that environmental regulation contributes
to productivity growth by reallocating inputs from less productive to more
productive firms, forcing the former to leave the market. The market thereby
becomes more productive than before, but less competitive, which enables the
remaining firms to raise prices, in turn harming welfare. The main findings
also indicate that tighter environmental regulations deliver strong incentives
to adopt cleaner abatement technologies, so that more stringent policy leads to
increases in average productivity and environmental quality, a "win-win"
situation. This conclusion is partly supported by the results of Xepapadeas and
de Zeeuw (1999), which indicate that stringency in environmental rules will not
provide a "win-win" situation in the sense of reducing emissions while
increasing profitability, but productivity should increase owing to the
modernization of the capital stock induced by stricter policy.

One possible reason for such mixed evidence in the literature, according to
Ambec et al. (2013), is that the prevailing number of previous studies have
considered only the static dimension, in which technology, processes, products,
and consumer preferences are fixed. Actual dynamic competition better reflects
reality, with changing technological opportunities combined with incomplete
information and complications in adjusting individual, group, or corporate
incentives. This means that if policies trigger innovations which in turn
reduce inefficiency and costs and stimulate technological development and hence
growth, these effects simply need time to occur as firms adapt optimally to the
new regulations. However, most studies regress proxied regulation stringency at
time 0 on productivity at the same time 0, which ultimately says little. By
contrast, studies that used lags of three or four years between regulation
stringency and changes in productivity, allowing for the dynamic effect, as in
Lanoie et al. (2008) or Managi et al. (2005), found a positive outcome.

The second point is that not all policies, but only well-designed ones (e.g.
combining stringency, flexibility, predictability, and
competition-friendliness), can contribute to productivity growth (Koźluk and
Zipperer, 2014; Ambec et al., 2013). Besides, market-based and flexible
instruments such as tradable allowances or performance standards are more
favourable to innovation than technology standards, because they give firms
more latitude to find the best-suited technological solutions for minimising
compliance expenditures. For example, well-defined property rights for
innovations and R&D activities can benefit innovating firms, though they may
slow down diffusion. This view is also supported by Sadeghzadeh (2014), who
states that if the encouraged technological change is a principal source of
productivity improvement, then environmental regulation leads to the Pareto
improvement, or "win-win" situation, explained by Porter, not only protecting
the environment but also stimulating aggregate competitiveness and
productivity.



I don’t remember how I got here; the first thing I remember is the feel of cold metal underneath me, its cold, smooth hardness unnerving. It was dark, but I could make out the four walls of the small room surrounding me. Looking around, I saw that the walls were made entirely of smooth, shiny metal.

*    *    *

I woke up without remembering going to sleep. I pushed my hair out of my face, and when I looked down at it, my hair was so black it looked almost dark blue in the dim yellow glow from a small light in the corner of the room. Sitting up and looking around, I could see a small metal bed covered with a mattress, a single blanket and a pillow resting on top. I stood up and noticed I was dressed in blue jeans and a red t-shirt. Looking down at my shirt, I realized it had “407” printed on the front in big black font. I remembered a strange dark blue vapor filling the room, and then the sound of my body hitting the floor with a thud.

*    *    *

A sound woke me: footsteps. I became aware of the fact that I was lying on the bed, even though I had not fallen asleep there. I opened my eyes and saw a small opening in the corner about four feet off the ground. I walked over to it, but I could not see anything, for it was too dark. As my eyes adjusted to the lack of light I could make out a set of light blue eyes looking at me with deep concern. A light flicked on in the corner and filled the small room with a dull yellow glow. I looked at the light and wondered where it came from. When I looked back, the warm concern in the light blue eyes had melted into fear, and the small opening slammed shut. I ran over to the wall and pushed on the spot where the opening had vanished. I screamed, I banged on that door, I did everything I could think of. I rammed the bed into the wall; I rammed myself into the wall. I did this for what seemed like hours, until I was too tired even to stand up straight. Then the same strange dark blue vapor filled the air, and I hit the ground with a thud.

*    *    *

I woke up on the floor this time. My bed was gone, and so was any trace of the window. I didn’t know what to do, so I sat up and stayed there. I had no idea how much time passed, but I knew it had been at least a couple of hours.

I heard a sliding sound come from behind me, from the same wall where I had seen the light blue eyes. I turned around and saw a door-sized opening next to where the small window-like opening had been. I stood up slowly, fearing it would slide shut if I got too close, and stepped through the door into a narrow hallway. Every fifteen feet there was a small opening with a little latch. I walked to the first one and slid it open; inside I could make out a small bed with a person-sized lump on it. The room looked a lot like mine. My thoughts snapped back to the lump when it shifted on the bed, got up, looked in my direction, and started towards me. I could make out nothing about this new person except that she was small, around my height. A dull yellow light flickered on in the corner of the room, and as my eyes adjusted, the figure turned to face it. I could tell the figure was not only my height but had my same build and my same hair, and she was wearing the same jeans and t-shirt I had on, except her t-shirt had the number 408 printed on it. I thought about how much she looked like me, and I began to feel bad for her, since she too was trapped in a small room just like mine. Then she turned towards me and I stopped breathing. Her light blue eyes pierced my gaze, and I couldn’t turn away. I slammed the small opening shut and started running frantically down the hallway. It ended at an elevator. I stepped inside carefully and pressed the up button, and when the elevator reached the top I stepped out into a large office. At the other end of the room there was a desk and an empty chair in front of a glass wall overlooking thousands of square rooms just like mine. I walked over to the glass wall and saw that each room was filled with one dull yellow light, one small metal bed, and one me.
I became aware of a presence with me in the room. I looked over my shoulder, into a pair of light blue eyes. “It’s magnificent, isn’t it?” she asked, an eerie smile painted on her lips.

The central venous pressure (CVP) is the pressure measured in the central veins located near the heart. It reflects mean right atrial pressure and is frequently used as an estimate of right ventricular preload. “The central venous pressure does not measure blood volume truthfully, although it is often used to estimate it. The central venous pressure value is determined by the pressure of venous blood in the vena cava and by the function of the right heart, and it is not only affected by intravascular volume and venous return, but also by venous tone and intrathoracic pressure, along with right heart function and myocardial compliance” (Department of Anaesthesiology, The University of Hong Kong, n.d.).

Over-distention of the venous collecting system can be recognized by central venous pressure measurements before clinical signs and symptoms become evident. Under normal circumstances, an increased venous return results in an augmented cardiac output without significant changes in central venous pressure. However, with poor right ventricular function or an obstructed pulmonary circulation, the right atrial pressure rises, causing a subsequent rise in the measured central venous pressure. Similarly, a patient with hypovolemia may exhibit a central venous pressure reading in the normal range, even though loss of blood volume or widespread vasodilation will reduce venous return and lower right atrial pressure and central venous pressure (Klabunde, 2014).

“The central venous pressure can be measured either manually using a manometer or electronically using a transducer. In either case the central venous pressure must be zeroed at the level of the right atrium. This is usually the level of the fourth intercostal space in the mid-axillary line while the patient is lying in a supine position. Each measurement of central venous pressure should be taken at this same zero position. Trends in the sequential measurement of central venous pressure are much more informative than single readings. However, if the central venous pressure is measured at a different level each time then this renders the trend in measurement inaccurate” (Department of Anaesthesiology, The University of Hong Kong, n.d.).

Pulmonary Artery Pressure.

Pulmonary artery pressure (PA pressure) is the blood pressure measured in the pulmonary artery. “Pulmonary artery pressure is generated by the right ventricle expelling blood into the pulmonary circulation, which acts as an opposition to the production from the right ventricle. With each ejection of blood during ventricular systole, the pulmonary arterial blood volume increases, which stretches the wall of the artery. As the heart relaxes, also known as ventricular diastole, blood continues to flow from the pulmonary artery into the pulmonary circulation. The smaller arteries and arterioles serve as the chief resistance vessels, and through changes in their diameter, regulate pulmonary vascular resistance” (Klabunde, 2010).

Today, pulmonary artery catheters are placed on a case-by-case basis, taking into consideration the patient’s condition and the staff’s qualifications. Indications for using a pulmonary artery catheter include severe cardiogenic pulmonary edema, acute respiratory distress syndrome in patients who are not hemodynamically stable, major thoracic surgery, and septic or severe cardiogenic shock. “A pulmonary artery pressure monitoring system uses a sensor to measure your pulmonary artery pressure and heart rate. The sensor is small, comparable to the size of a penny, with two thin loops at each of the ends. The sensor is implanted in the pulmonary artery of the heart. Normally you will not feel the sensor, and it will not impede with your day to day activities or other devices that may be implanted such as a pacemaker or defibrillator” (Abbott, n.d.).

Pulmonary Capillary Wedge Pressure.

Pulmonary capillary wedge pressure provides an estimate of left atrial pressure. Left atrial pressure can be measured directly by inserting a catheter into the right atrium and piercing through the interatrial septum; however, for obvious reasons, this is not usually done because of the damage to the septum and the potential harm to the patient. “It is helpful to measure pulmonary capillary wedge pressure to diagnose the severity of left ventricular failure and to calculate the degree of mitral valve stenosis. Both conditions raise left atrial pressure and therefore raise pulmonary capillary wedge pressure. Aortic valve stenosis and regurgitation, and mitral regurgitation also elevate left atrial pressure. When these pressures are above twenty millimeters of mercury, pulmonary edema is probable to transpire,” which is life-threatening to the patient.


On March 11, 2011, an earthquake led to a major catastrophe at the Fukushima Daiichi Nuclear Power Plant in Okuma, Japan. The earthquake triggered a tsunami over 14 meters tall, which cut off electricity to the power plant. Efforts to restore power to the facility were thwarted by flooding that washed away the fuel tanks for the emergency diesel generators. Despite strenuous efforts, the emergency cooling systems could not be operated. This inevitably led to a meltdown, in which hydrogen explosions heavily damaged the facility and released large amounts of radioactive material into the environment (World Nuclear Organization, October 2017). This man-made calamity has been dubbed the Fukushima disaster.

In all nuclear power plants, the atoms of a highly radioactive fuel (such as uranium) release energy as they split into isotopes of other elements, a process called nuclear fission. It is common for the resulting isotopes to also undergo radioactive decay, consequently releasing more radiation into the environment. Following the Fukushima disaster, several radioactive isotopes were released into the atmosphere and the Pacific Ocean. The most common isotopes released by the Fukushima disaster were isotopes of iodine, cesium, and xenon (Ministry of Foreign Affairs, 2013). These isotopes are all natural byproducts of the fission reactions within nuclear power plants.

Following the Fukushima disaster, one of the largest environmental concerns was the discharge of harmful cesium (Ministry of Foreign Affairs, 2011). The element cesium has 55 protons and an atomic mass of about 133 amu. Within the Fukushima reactor, an isotope of cesium called Cs-137 is produced. The University of Sheffield (1993) describes Cs-137 as having a half-life of 30 years. As it decays, the isotope releases gamma radiation and beta particles, which can pass through organic tissue, causing DNA and cellular damage. The most significant threat to the environment was the possibility of Cs-137 entering the atmosphere and falling into the Pacific Ocean. Its long half-life presents a danger to the environment that can last for decades (Environmental Impact Assessment, 2015). Fortunately, due to the direction of the wind immediately following the disaster, most of the Cs-137 remained within Japan. At the same time, however, this meant that regions within twenty miles of the disaster would remain, even today, uninhabitable due to soil contamination (Dispersion of Radioactive Material from the Fukushima, 2013). According to the World Nuclear Organization (2017), small amounts of atmospheric Cs-137 traveled as far as the coast of California, but the levels detected were not considered hazardous to the public.

I-131, a radioactive isotope of iodine, was also released into the environment. The element iodine is water-soluble, with 53 protons and an atomic mass of about 127 amu. According to the University of Sheffield (1993), I-131 has a half-life of about eight days, and it also emits beta particles. Having such a short half-life means its long-term impact on the environment is relatively insignificant.

Another radioactive isotope released into the environment was Xe-133. The element xenon is a noble gas with 54 protons and an atomic weight of just over 131 amu. Being a noble gas, xenon does not readily react with other elements, and is therefore not easily incorporated into the chemical compounds within organic tissue. According to the University of Sheffield (1993), Xe-133 has a half-life of only 5 days. Like I-131 and Cs-137, Xe-133 emits beta particles which can cause cellular damage, but its short half-life and chemical inertness limit the possibility of causing severe environmental issues.
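The contrast between these three half-lives can be made concrete with the standard decay relation N(t) = N0 · (1/2)^(t/T½). A minimal sketch, using only the approximate half-lives quoted above (not precise nuclear data):

```python
# Fraction of each isotope remaining after t days, using the
# half-life decay law N(t) = N0 * (1/2) ** (t / half_life).
HALF_LIVES_DAYS = {
    "Cs-137": 30 * 365.25,  # ~30 years, as quoted above
    "I-131": 8.0,           # ~8 days
    "Xe-133": 5.0,          # ~5 days
}

def fraction_remaining(half_life_days: float, t_days: float) -> float:
    """Fraction of the original atoms left after t_days."""
    return 0.5 ** (t_days / half_life_days)

for name, half_life in HALF_LIVES_DAYS.items():
    frac = fraction_remaining(half_life, 90)
    print(f"{name}: {frac:.6f} remaining after 90 days")
```

Three months after release, essentially all of the I-131 and Xe-133 are gone (fractions on the order of 10⁻⁴ and 10⁻⁶ respectively), while over 99% of the Cs-137 remains, which is why cesium dominates the long-term contamination.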

Interestingly, there have been zero deaths and very few radiation-related injuries due to the Fukushima disaster. Even people associated with the operation of the plant and the first responders to the disaster reported only minor radiation exposure. This stands in sharp contrast to the Chernobyl accident, in which many people suffered injuries caused by radiation (Society of Environmental Toxicology and Chemistry, 2016).

Overall, the radioactivity levels measured in the aquatic ecosystems near Fukushima were shown to be much lower than what was initially predicted by studies immediately following the disaster. Exposures were found to be too low to cause serious ill effects in the populations of various marine organisms (Off-site Decontamination, Events and Highlights, August 2017). It is generally agreed that the quick radioactive decay of Xe-133 and I-131, as well as the fallout being mostly confined to Japan, were the greatest contributing factors to the low levels of exposure measured in the environment. Although there have been some cases where individual fish were found to have worrying levels of radiation, this has not been observed on the scale of entire populations, which was the initial fear (5 Years After Fukushima, 2017).

On land, studies of the plants and animals in the forests near Fukushima have noted a range of “physiological, developmental, morphological, and behavioral consequences” of radiation exposure (Society of Environmental Toxicology and Chemistry, 2016). Although some minor effects have been observed in the populations with the greatest exposure to the disaster (such as monkeys, insects, birds, and trees), samples from the marine environment have shown that radiation exposure to marine birds and macro-algae is far below the levels initially expected. This is very fortunate, as these marine organisms play a far more important role in the overall environment than their terrestrial counterparts (Impact of the Fukushima Accident on Marine Life, 2016). Again, in comparison to the accident at Chernobyl, the environmental impact of the Fukushima disaster appears far less severe.

The Fukushima accident in Japan has set back the public perception of the safety of nuclear energy, and has caused a noticeable recession in the construction of nuclear power plants (5 Years After Fukushima, 2017). This stands against the fact that there were zero deaths and no serious radiation exposures attributed to the disaster. Despite the fact that the incident caused no casualties, many members of the public still fear the radiation levels in Fukushima, even though those levels are considered lower than the natural background levels found in other parts of the world (World Nuclear Organization, October 2017).
Data shows that nuclear power plants worldwide produced 2346 TWh in 2012. The incident at Fukushima caused the biggest one-year decline of nuclear power in 2012; the generation of electricity was 7% less than in 2011 (Dr. Jim Green, 2013, par. 2). The incident led to a full year of suspended operations in Japan, where 48 operable reactors were closed, and to the cancellation of eight reactor projects in Germany. This was a huge loss in the production of electricity and resulted in the loss of tens of billions of dollars. Globally, it was the lowest nuclear generation since 1999. Generation of electricity by nuclear power fell in over a dozen countries, including all of the top five nuclear-generating countries (Lessons From Fukushima, 2012).
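The scale of that 7% decline can be sanity-checked with one line of arithmetic: if 2346 TWh in 2012 was 7% below the 2011 level, the implied 2011 generation is 2346 / 0.93 ≈ 2523 TWh, a drop of roughly 177 TWh in a single year.

```python
# Back out the 2011 generation implied by the figures quoted above:
# 2346 TWh in 2012, described as a 7% decline from 2011.
twh_2012 = 2346.0
decline = 0.07
twh_2011 = twh_2012 / (1 - decline)
print(f"Implied 2011 generation: {twh_2011:.0f} TWh")
print(f"One-year drop: {twh_2011 - twh_2012:.0f} TWh")
```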

It is
very unfortunate that the Fukushima disaster occurred. Even more unfortunate
was the world’s hysterical response to it. Many agree, myself included, that
there should be a reduction in the use of fossil fuels; and that they should be
replaced by low-emission sources of energy. The byproduct of nuclear energy is
harmless steam, and a very small amount of radioactive waste which can be
safely stored away. Nuclear energy is the only large-scale substitute for
fossil fuels, and is readily available for creating a continuous and reliable
supply of electricity. 

Old age is usually the end of the human life cycle. Health may deteriorate physically and mentally, and this varies culturally and historically. Each person’s aging process differs from individual to individual, depending on food habits, lifestyle, and similar concerns. Older people are people full of life who are simply undergoing senescence, the universal and inevitable changes that all of us experience from the time we are born.

Old age is an integral part of human life. It is the evening of life: an unavoidable, unwelcome, and problem-ridden phase. It is interesting that everyone wants to live a long life but no one wants to be old. Old age completes the life pattern. It has its own pleasures, though different from the pleasures of youth. Older people have a wealth of experience that can be helpful to everyone else. From older people we can gain knowledge and wisdom, and learn morals, principles, and values. They also teach people to love, to care, to give, to forgive, to accept, to support, and to face life. Older people can be irritable because of worries and unwanted tension.

People undergo many age-related health problems, and vision loss among the elderly is a major health care problem. Some may lose hearing ability, and the functioning of the organs decreases gradually. The metabolism of aged people slows considerably, and hence they cannot always eat normal food.

Old people can be like infants; it is almost as if people’s lives start to reverse when they age. They tend to act like toddlers, and hence sometimes cry, whine, scream, and yell, and they complain and throw tantrums because they have frequent mood swings.

Aging is a natural process, and longevity is the desirable and natural aim of any society. Aging is also a risk factor for declines in function and health. Older people have been stereotyped, for example, as burdens on society by virtue of their numbers and their dependence on family members. Two hundred years ago someone aged 40 seemed ‘old’; today they would be considered ‘young’. Old age is defined in the present as beginning somewhere around the ages of 60 to 65. The scientific study of old age, the aging process, and the particular problems of old people is called gerontology. Conventionally, gerontologists and demographers choose 60 or 65 as the lower limit of old age for good reasons: 60 or 65 are the ages at which state or private pensions most frequently begin to be paid in many countries.

Ageism is something undesirable, and one manifestation is the widespread assumption that older people have only simple needs, such as food, warmth, cleanliness, and safety, whereas the reality is that older people need effective and skilful practice. Everyone in society should prefer the term ‘older people’ to ‘the elderly’, which stigmatises the fifty years of life that can be experienced under this term. In older age, people may experience a continuity of poverty, which could be worsened by ageing, for example as a result of widowhood or the experience of illness.

owner of the investment agency named FORINTAS & GOLDSCHMIDT SINGAPORE PTE. LTD, which operates in several nations around the world. The main area FORINTAS & GOLDSCHMIDT SINGAPORE PTE. LTD focuses on is real estate investment. Along with real estate investment, they also focus on cash and currency.

We have a brilliant administration, where our help desk has robo-advisors that do not delay any information enquired about by the customer. Forintas & Goldschmidt has advisory experts who will advise the customer about the risk assessment for their investment.

Forintas & Goldschmidt announced that their main aim is to provide profitable returns to their customers. Forintas & Goldschmidt deals with genuine people who have earned profitable returns.

We support prompt decisions through introduction, quality, protection, and evaluation of risk. We have a sales procedure with a proven record of very good returns, and we follow it so that our customers get profitable results.

Forintas & Goldschmidt venture advice:

Our advisory experts, including money-related organizers, specialists, and financiers, will provide extensive and safe advice to deliver secure returns. We have an impeccable record: where returns would once have been only a fantasy, our advisory team has made them a reality.

Forintas & Goldschmidt Experts

Forintas & Goldschmidt has an effective team that will manage your money well. Our finance management strategy shows a path along which our customers can stay safe. We help our customers see their bank balance increase even if the market falls.

We help our customers take an effective approach to shorting an investment. While other people lose money when the market goes down, making money at such a time will give you confidence.

We track all the stock records every day, and we won’t miss any stock that will be profitable for our customers. Our team keeps up with the current changes in the market.

FORINTAS & GOLDSCHMIDT has made a huge impact on bank management. People find it difficult to get a large loan for a high-end property; we will help you get a good loan amount for your property.
Our Top Approaches to Contribute Small Amounts of Cash:

We help you with bank investments, so that your returns will be good.

We look for the betterment of your investments.

We help you with Lending Club, Motif, and paying down debt.

We show you examples of how employer-matched retirement is useful and helpful.

We make a better retirement plan for your future and prosperity.

We explain and give demos of US Treasury Securities.

We help you to enhance your own skills.

We explain to you how to use Dividend Reinvestment Plans (DRIPs).

We have an expert team in mutual funds, ETFs, and Loyal3.

We give appropriate information about online brokerage firms.

We help you to have your own business.