Saturday, August 31, 2019

Model Stock Research for the Time-Warner Company Essay

Macroeconomic Review

Being one of the fastest-paced and highest-profile industries in the world, the media sector has been in a whirlwind of change this past decade. There has been an explosive boom and bust and, of late, boom again in internet technology, which has dramatically influenced media delivery. Clampdowns on shady accounting practices, assets changing hands and a more discerning, more demanding media audience have also ensured that changes in the industry occurred at break-neck speed. This is why the global media giant Time Warner has sought to embrace the challenges of the Information Age. Indeed, Time Warner has uniquely positioned itself to benefit from these explosive changes: its size and resources make it a formidable competitor in the media arena because of its efficiency in an increasingly global environment.

In the media arena, the average US citizen is confronted by more than 1,500 dailies, over 5,700 weekly newspapers, some 17,000 magazine titles, 10,000 commercial radio stations and more than 1,600 TV stations. Nielsen Media Research reported that as of January 2003, 98.2% of the more than 100 million US households owned at least one TV set, with 69.8% of them hooked up to cable. The US also exports a massive amount of its media, which has become almost staple fare around the world. CNBC alone boasts a reach of 192 million households worldwide, 82 million of them in the US and Canada. The latest available GDP statistics from the US Bureau of Economic Analysis show that the radio and TV industry contributed $72.9 billion to US GDP in 2001, up from $71.1 billion in 2000; total US GDP for 2001 was $10,082 billion. For 2006, US real GDP growth is estimated at 3.2%, while the lending interest rate stands at 8% (see Table 1).

Table 1. United States – Country Data and Market Indicators (EIU, 2006).
Series (units)                                    2001      2002      2003      2004      2005      2006
GDP (% real change pa)                             0.8       1.6       2.5       3.9       3.2       3.2
Lending interest rate (%)                          6.9       4.7       4.1       4.3       6.2       8.0
Consumer prices (% change pa; av)                  2.8       1.6       2.3       2.7       3.4       3.3
Population (m)                                   285.1     288.0     290.8     293.6     296.4     299.7
Population (% change pa)                           1.0       1.0       1.0       1.0       1.0       1.1
Labour force (m)                                 143.8     144.9     146.5     147.4     149.3     151.4
Recorded unemployment (%)                          4.7       5.8       6.0       5.5       5.1       4.6
GDP per head (US$)                             35524.2   36352.8   37691.7   39894.3   42023.7   44110.0
GDP per head ($ at PPP)                        35524.2   36352.8   37691.7   39894.3   42023.7   44110.0
Private consumption per head (US$)             24745.9   25523.4   26491.1   27969.4   29495.1   30960.0
Real GDP growth per head (% pa)                   -0.2       0.6       1.5       2.9       2.2       2.1
Personal disposable income (bn LCU)             7486.8    7830.1    8162.5    8681.6    9036.1    9580.2
Personal disposable income (US$ m)             7486840   7830080   8162530   8681560   9036100   9580150
Real personal disposable income
  (US$ m at 1996 prices)                       6860090   7074210   7231140   7493920   7581650   7811600
Real personal disposable income (% change pa)      1.9       3.1       2.2       3.6       1.2       3.0
Average real wage index (LCU, 1996=100)          107.3     108.9     109.4     108.9     108.1     108.6
Average real wages (% change pa)                   1.0       1.5       0.4      -0.5      -0.7       0.5

The fact remains that the US is the world's biggest media producer as well as consumer. Advertising is the main source of revenue, although some sectors also generate revenue from subscriptions. Media concerns with entertainment arms have additional sources of income in takings from gaming, distribution rights, amusement park entrance fees and spin-off merchandise. Entertainment is also one of America's top exports.
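The per-head growth figures in Table 1 can be cross-checked against the headline series: real GDP growth per head is approximately total real GDP growth minus population growth. A minimal sketch, with the values copied from the table:

```python
# Cross-check of Table 1: per-head real GDP growth ~ GDP growth - population growth.
gdp_growth = {2003: 2.5, 2004: 3.9, 2005: 3.2}   # GDP (% real change pa)
pop_growth = {2003: 1.0, 2004: 1.0, 2005: 1.0}   # Population (% change pa)
per_head   = {2003: 1.5, 2004: 2.9, 2005: 2.2}   # Real GDP growth per head (% pa)

for year in gdp_growth:
    approx = gdp_growth[year] - pop_growth[year]
    # For every year shown, the difference matches the table to rounding
    assert abs(approx - per_head[year]) < 0.1, (year, approx)
```

For the years shown, the approximation matches the table exactly.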
In 1999, in fact, film, television, music, radio, advertising, print publishing, and computer software together were the top export, almost $80 billion worth, and while software alone accounted for $50 billion of the total, some of that category also qualifies as entertainment—video games and pornography, for example. Hardly anyone is exempt from the force of American images and sounds. . . . American popular culture is the nemesis that hundreds of millions—perhaps billions—of people love, and love to hate. The antagonism and the dependency are inseparable, for the media flood—essentially American in its origin, but virtually unlimited in its reach—represents, like it or not, a common imagination.

However, media availability is somewhat disproportionate to the time an average American has to consume information. Still, the industry is a lucrative one, and media spinners keep finding new ways to make the public continue to consume media and pay for it. In 2001, companies in the media industry recorded total revenues of $261.7 billion. Although this was growth over 2000's $255.2 billion, operating income had been falling steadily since 1998, largely because cable and satellite providers experienced rising maintenance costs and were investing heavily in new technology. The decline in income is expected to ease over the next few years as investments in new delivery channels start to bear fruit. Meanwhile, many companies held back on advertising activities following the recession in 2001. The 9/11 tragedy, the subsequent wars in Afghanistan and Iraq and their accelerating effect on the economic downturn brought increased uncertainty to the stock markets and exacerbated the advertising slowdown. This downtrend was reversed through most of 2002, as many believed a swift end to the Iraqi invasion would emerge.
Market jitters returned in the third quarter of 2002 and early in 2003, which somewhat stalled advertising expansion as the Iraqi situation refused to look as good as the President's claims. However, tentative cheers on US trading floors and moderate improvements in the job market slowly rebuilt advertising momentum in the third quarter of 2003. Time Warner's strategy, for its part, has insisted on managing costs aggressively. In 2005, the company undertook difficult but necessary restructurings at a number of its divisions to ensure that costs are aligned with long-term business needs. At Warner Bros., for example, management was streamlined to create a single Home Entertainment Group to oversee the digital delivery of entertainment to consumers. Looking ahead, the company plans to reduce costs by $1 billion across its businesses in 2006 and 2007. Trade disputes with the EU and China and persistent trouble for US interests in Europe and the Middle East are forming grey clouds over the economic horizon. Also, big-budget media advertising, with the exception of outdoor advertising, is invariably sparse at year-end, when readership and viewership are traditionally down due to a lack of new programs and major sporting events. Consumers are usually on vacation or out holiday shopping at year-end, rather than at home reading or sitting in front of 'the box', giving media advertising less reach. However, prudent companies are aware that a prolonged advertising drought can adversely affect brand recall and consequently spell slower product movement. Thus, although advertising revenue increases in 2003 were more modest than expected (with the exception of the cable television, syndication and Spanish-network segments), this income source is predicted to grow in 2004.
As Time Warner moves forward amid these external challenges, the foundation of its strategy is to invest its financial resources in a disciplined manner to provide the best possible return to shareholders. This means focusing on the right businesses. The board of directors and management continuously evaluate Time Warner's businesses to ensure that they meet the company's standards for financial performance, growth and return on investment.

Industry Overview

The United States market for cable and satellite TV services has grown by 6.5% since 2003 to reach a value of US$57.6 billion in 2004. Over 2001 to 2005, value sales increased by 36.5%. Over 73 million American households subscribed to cable television services, with 34% of them having digital service in 2004. In 2004, the average monthly price for expanded basic programming packages was US$38.23. Satellite TV services are expected to continue to increase in popularity: satellite TV is offering aggressive pricing packages relative to cable, an increasing number of special-interest channels and local channels in all markets. Local channels were previously unavailable to subscribers. Despite the rise of satellite TV, Time Warner's networks and cable segments have been posting consistent revenue growth in recent years. Revenue from the networks segment increased from $8,434 million in 2003 to $9,611 million in 2005, a compound annual growth rate of about 7%. Revenue from the cable division increased from $7,699 million in 2003 to $9,498 million in 2005, a compound annual growth rate of about 11%. These two segments together contribute more than 42% of the total revenues of the company. Increasing segmental revenues have contributed to the company's overall revenue growth of 3.7% in fiscal 2005 over fiscal 2004. Cable television will therefore likely continue to generate healthy revenue growth for owners of those networks, though gains may well be slower than over the past several years.
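The two segment growth rates quoted above span two years (2003 to 2005), so they read as compound annual rates. A quick sketch of that arithmetic:

```python
# Compound annual growth rate (CAGR) implied by the two-year segment figures above.
def cagr(start, end, years):
    """Annualized growth rate between two revenue figures ($ millions)."""
    return (end / start) ** (1 / years) - 1

networks = cagr(8434, 9611, 2)   # networks segment, 2003 -> 2005
cable    = cagr(7699, 9498, 2)   # cable division, 2003 -> 2005
print(f"networks {networks:.1%}, cable {cable:.1%}")
# -> networks 6.7%, cable 11.1%  (the ~7% and ~11% rates quoted above)
```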
Beneficiaries of ongoing strength in cable include Viacom, Time Warner, News Corp. and Disney. The most significant change in the media industry in the past decade has been its adoption of internet technology. The internet has evolved from being just a communications tool to becoming an important entertainment, business and marketplace platform. Catching up is the cable segment, which is embracing broadband technology in earnest and rapidly overtaking traditional dial-up technology in supplying telephony and especially internet services to North American homes. From 1996 through 2003, the US cable industry spent $75 billion in private capital on plant and equipment as well as infrastructure upgrades, according to the NCTA. The cable industry as a whole is moving from analog to digital technology to compete with the high-quality, low-interruption signal transmission broadcast by DBS companies, which have offered high-quality, encrypted digital transmission almost since day one. The competition between cable and satellite TV is becoming more intense. Apart from normal TV programs and movie line-ups, both offer interactive features (cable TV being a recent entrant) and internet technologies on their systems, taking the TV experience to new heights. Not only can viewers play interactive games on TV, they can also interact with programs they are watching, for example responding to interactive surveys or making immediate purchases on shopping channels via the remote control. The latest technological advancements also allow viewers to record, pause, forward and reverse live programs, or watch them in slow motion or instant replay, using digital/personal video recorders (DVRs or PVRs) and video-on-demand (VoD) devices for satellite TV and cable TV, respectively. Unfortunately, the digital revolution is bringing problems to some in the industry. Content and program providers are anxious over the dent DVRs and VoD may make in their earnings.
How serious their concerns are remains to be seen, but industry observers note that a predecessor of DVRs and VoD, the VCR, was greeted with the same disquietude, which was soon replaced with blithe indifference as the technology propagated a new earning capacity, namely the sale of videos. An issue that bothers media executives is their loss of control over viewers. Viewers can replay scenes they like during a commercial break, effectively bypassing messages from advertisers, who happen to be program sponsors. This could lead advertisers to see TV as a less effective advertising channel than it used to be, giving them better leverage in commercial-slot price negotiations or causing them to adopt other advertising media. As viewers become more discerning, they are demanding greater viewing variety and higher-quality programs. They are also getting hi-tech, seeking a greater, more interactive TV viewing experience, much as they have come to expect from their personal computers. The FCC, the federal regulator for the media and telecommunications industry, is aware of this and is pushing the industry to hurry the digital transition. The FCC has mandated that all TV broadcast stations have High Definition TV (HDTV) broadcasting capability by 2006. This will mean a bigger outlay for broadcasters and cable companies in the coming few years: broadcasters and program networks will have to invest in new cameras, titling and editing equipment and tape machines that support the digital TV (DTV) format, plus revamped rigs for DTV-friendly TV vans. Cable operators need to convert all their equipment and set-top boxes; for viewers with HDTV sets, however, the set-top boxes are bypassed. Time Warner has responded to this challenge through its subsidiary Warner Bros. Entertainment, which tied up with CBS Corporation to form a new broadcast network. This new network, The CW, to be launched in late 2006, can significantly expand Time Warner's customer base.
Time Warner's Cartoon Network channel entered into a joint venture with VIZ Media to form Toonami Jetstream, a new broadband service providing streaming episodes of animation series. Toonami Jetstream will allow users to view episodes of Cartoon Network in their own time and also provide an alternative distribution vehicle for Time Warner. These alliances and joint ventures can give Time Warner a competitive advantage over its peers and enable it to enhance its revenue position.

Expanding broadband market

Most players in the cable industry have begun the digital journey, but consumers may still need to dig into their pockets to enjoy the digital experience and make the analog age a thing of the past. They have to either buy new set-top boxes, which convert digital signals to analog, or buy HDTV sets, which range from just under $1,000 to almost $10,000. Early in June 2003, when the FCC eased its decades-old restrictions on the size of media entities, controversy erupted. Large media companies hailed the move; consumer groups condemned the decision as bad news for democracy and local content. The new rule, which allowed media companies a US penetration cap of 45% instead of the old 35%, was good news to media giants operating close to the 35% limit. They had been lobbying hard for the lift, including Viacom, whose $40.6 billion purchase of CBS made it the US' largest single operator of TV and radio stations, reaching 41% of the total national broadcasting market. The 45% rule looked set to open the floodgates for other media liberalization that would allow TV, radio and newspaper owners much more room for consolidation. If a large TV-station owner acquired the newspaper in a small, one-paper town, that entity would dominate the community, threatening local content in the community's media. However, the 45% rule was blocked by Congress in a massive 400-to-21 vote in July 2003.
This was followed by a stay order from a federal court a few weeks later. Should the FCC fail on appeal to have the new cap reinstated, media giants who have exceeded the old limit will have to shed their excess assets, and those nearing the demarcation point will need to rule out expansion as a way to increase income. Time Warner, which garners some of its revenues from films, should grow its studio profits well. It is releasing several DVDs of popular titles, though film profits generally sway on the timing of releases. Viacom, Disney and Dreamworks Animation also have large stakes in the sector, which will likely move further towards home viewing via digital cable and the Internet. With many highly cosmopolitan cities, the US has various minority and ethnic groups looking for more than just generic programs that do not necessarily depict their lifestyles or cater to their tastes. Many minority communities have addressed these issues by producing their own newspapers, TV programs and radio broadcasts. As their respective populations grow, so has the business of their specialty media. Having long observed the growth of these niche markets, bigger players are now making moves toward grabbing a slice of the ethnic-specific media pie that serves large minority communities. Previously, being culture-sensitive meant placing non-Caucasian actors in supporting roles, but, belatedly, major media companies are dedicating whole TV and audio channels to specific ethnic groups. In the media industry, basic services were the largest sector, accounting for 53.1% of sales in 2004, worth US$30.6 billion. Advertising was the most dynamic sector: growing from US$8.5 billion in 2000 to US$15.9 billion in 2004, it achieved 87% growth. Pay-per-view movies grew by US$400 million over the review period, to account for 2.8% of sales in 2004.
In 2004, premium channels accounted for US$9.5 billion, or 16.5% of the market, realizing 13% growth. Cable TV continues to dominate the premium TV market with 76 percent of households, and its market penetration is still increasing.

Table 2. United States – Media Market Sectors (US$ billion)
Sector                 2000    2004
Advertising             8.5    15.9
Basic services         24.1    30.6
Pay-per-view movies     1.2     1.6
Premium channels        8.4     9.5
Source: Euromonitor International

In terms of performance, Comcast Corporation was the leader in cable and satellite TV services in the United States in 2004 with a 32% market share. It maintained its leading position through product innovation and differentiation, including its ON DEMAND offerings, increased regional sports programming and its leading Comcast.net portal. Time Warner Inc had the second-largest market share in 2004 at 17.2%, an increase of 9.5% over 2003. AOL Time Warner was able to improve its position by taking a lead role in offering new products to its customers, including High Definition Television, the Digital Video Recorder, Wireless Home Networking and Digital Telephony service. Through expansion of its US market, Cox Communications Inc increased its market share by 7.7% from 2003, to 9.7% in 2004. Charter Communications saw its market share decrease to 9.3% in 2004.

Table 3. United States – Media Market Share (% value of market sector, 2004)
Comcast Corporation       32.0
AOL Time Warner Inc       17.2
Charter Communications     9.3
Cox Communications Inc     9.7
Adelphia                   8.2
Source: Euromonitor International

In the global arena, Hollywood's long-standing tensions with China have taken their toll: Time Warner is pulling out of an ambitious, four-year theater venture in the country because of tightened restrictions on foreign ownership. The decision, announced in November 2006, came after its Warner Bros.
unit tried unsuccessfully for more than a year to negotiate a compromise with the Chinese government over a July 2005 ruling requiring outside investors to cede control of ventures to their Chinese partners. Warner's decision underscores Hollywood's frustrations operating in China. Although studio executives consider China to be the world's best growth opportunity for U.S. entertainment, they are also wary of expanding there, in part because of what they believe are burdensome government rules. Although the media market is crowded with competitors, Time Warner has been a formidable competitor because it offers diversified yet complementary products and services. The company operates in the print media, television, cinema, internet, cable services and wired broadband segments. Leveraging its operations in complementary segments, the company has been able to reproduce the same content in various formats to generate additional sales. Its wide product portfolio has also allowed the company to offer superior bundles to customers.

Company Analysis – Time-Warner

Time Warner is one of the world's leading media and entertainment companies. Its major businesses encompass an array of respected and successful media brands. Among the company's brands are HBO, CNN, AOL, Time, Fortune, People, Sports Illustrated, and Time Warner Cable. CNN operates in nearly 200 countries, while AOL is the world's leader in interactive services with 19.5 million subscribers in the US and 6 million in Europe at the last count. Time Warner's cable business, Time Warner Cable (TWC), is the second-largest cable operator in the US, while Warner Bros is one of the world's leading studios. These are well-established brands with global brand recall, and the company can leverage their equity to generate sales. New developments continue to stream in at Time Warner. In 2004, Time Warner Cable announced the creation of a new business unit, Time Warner Cable Voice Services.
This unit was responsible for overseeing the rollout of the company's residential telephone service, known as Digital Phone. During the same year, AOL Europe and Google announced a new multi-year agreement to provide targeted advertising from Google's AdWords advertisers to the subscribers of AOL Europe. In February 2005, Warner Home Video announced the formation of CAV Warner Home Entertainment Company, a joint venture with China Audio Video. The company also entered into a joint venture with New Line Cinema to form Picturehouse. AOL announced the acquisition of Weblogs, a blogging company, and in November 2005 acquired an online digital music subscription company called MusicNow. During the same month the company, along with several other cable companies, concluded an agreement with Sprint under which they would form a joint venture to provide wireless and wireline entertainment products. AOL acquired Truveo, a pioneer in internet video search, in January 2006. In the same month Time Warner entered into an agreement with CBS to launch a new television network, The CW. Cartoon Network formed a joint venture with VIZ Media in April 2006 to create Toonami Jetstream, providing broadband video services. Time Warner has been continually profitable. The company recorded revenues of $43,652 million during the fiscal year ended December 2005, an increase of 3.7% over 2004. For fiscal year 2005, the US, the company's largest geographic market, accounted for 79% of total revenues. Time Warner generates revenues through its five business divisions: filmed entertainment (26.4% of total revenue during fiscal year 2005), networks (21.3%), cable (21%), AOL (18.3%), and publishing (12.9%). During fiscal year 2005, the filmed entertainment division recorded revenues of $11,924 million, an increase of 0.6% over 2004. The networks division recorded revenues of $9,611 million in fiscal year 2005, an increase of 6.2% over 2004.
The cable division recorded revenues of $9,498 million in fiscal year 2005, an increase of 12% over 2004. The AOL division recorded revenues of $8,283 million, a decrease of 4.7% from 2004, and the publishing division $5,846 million, an increase of 5% over 2004. By geography, the US remains Time Warner's largest market, accounting for 79% of total revenues in fiscal year 2005; revenues from the US reached $34,469 million in 2005, an increase of 2.7% over 2004. Other international countries accounted for 6.7% of total revenues, at $2,907 million, an increase of 4.5% over 2004. The UK accounted for 6.6%, at $2,886 million, an increase of 15.1% over 2004. Germany accounted for 2.8%, at $1,233 million, an increase of 6.2% over 2004. France accounted for 2.2%, at $941 million, an increase of 7.1% over 2004. Canada accounted for 1.4%, at $625 million, an increase of 24.3% over 2004. Japan accounted for 1.4%, at $591 million, a decrease of 13.7% from 2004.
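The geographic breakdown above is internally consistent: the regional revenues sum exactly to the $43,652 million total, and each quoted percentage falls out of a simple division. A sketch:

```python
# Recompute the fiscal-2005 geographic revenue shares from the figures above
# (all revenues in $ millions).
total = 43652
regions = {
    "US": 34469,
    "Other international": 2907,
    "UK": 2886,
    "Germany": 1233,
    "France": 941,
    "Canada": 625,
    "Japan": 591,
}
assert sum(regions.values()) == total            # the breakdown is exhaustive
shares = {name: 100 * rev / total for name, rev in regions.items()}
print({name: round(pct, 1) for name, pct in shares.items()})
# Shares round to 79.0, 6.7, 6.6, 2.8, 2.2, 1.4 and 1.4 — matching the text.
```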
Financial Statement Analysis

Company Posted Sales ($ million)
Fiscal Year    Total Sales
2003           39,565
2004           42,089
2005           43,652

Profitability Ratios
                               2003         2004         2005         2006*     2007*
Sales ($M)                     39,565       42,089       43,652       44,900    46,500
Gross margin                   0.041        0.042        0.043        0.4       0.4
Operating margin (%)           30.68        32.008       26.06        30        30
Pre-tax margin (%)             11.42        11.66        9.37         n/a       n/a
Net profit margin (%)          6.67         7.99         6.65         11.4      8.3
Accounts payable ($000)        1,629,000    1,494,000    1,380,000    n/a       n/a
Net expenses ($000)            12,559,000   13,094,000   13,676,000   n/a       n/a
Inventories ($000)             1,390,000    1,737,000    1,806,000    n/a       n/a
Revenues per share ($)         9.03         9.354        9.705        11.8      12.9
Cash flow per share ($)        2.024        2.076        1.374        3.12      3
Earnings per share ($)         0.68         0.68         0.62         1.35      1.1
*Projected (Source: Value Line Investment Survey)

Time Warner otherwise remains a bright entertainment conglomerate. Its networks and cable segments have posted consistent revenue growth in recent years: revenue from the networks segment increased from $8,434 million in 2003 to $9,611 million in 2005, and revenue from the cable division from $7,699 million to $9,498 million over the same period. These two segments together contribute more than 42% of the company's total revenues, and their growth drove the company's overall revenue growth of 3.7% in fiscal 2005 over fiscal 2004. After trying to devise a way to maintain AOL's subscription service in a high-speed world, management finally threw in the towel and decided to give AOL's services away for free, focusing on advertising revenue. The move may have been late, but not so late that it won't help stem the erosion of AOL's user base. The big concern is whether advertising revenues will be sufficient to offset subscription losses. Still, this property is an important part of the company's overall collection of media-related businesses. However, the performance of the filmed entertainment segment and the AOL segment has been weak in the past three years.
Revenue from the filmed entertainment segment grew by as little as 0.6%. Revenues from the AOL segment declined from $8,598 million in 2003 to $8,283 million in 2005, an annualized decline of about 2%. The two segments contribute around 45% of the company's total revenues, and their weak operating performance indicates that the company has been losing ground to its competitors. The reason for these projections and forecasts is that TWX continues to generate operating profit, though profitability has weakened: operating profit and net profit declined 26.7% and 13.6%, respectively, in fiscal 2005 compared with fiscal 2004. The company's operating margin declined from 14.6% in fiscal 2004 to 10.4% in fiscal 2005, while its net profit margin declined from 8% to 6.6% in the same period. Declining profit margins indicate increasing costs and can adversely affect the company's long-term financial position.

Declining cash from operating activities

Time Warner's cash flow from operations has been declining in recent years, from $6,601 million in fiscal year 2003 to $4,965 million in 2005. Declining cash flows can force the company to borrow external capital to fund its growth plans, which could prove expensive.

TWX key statistics:
Dividend rate per share ($)     0.22
Shares outstanding (M)          3,972.58
Average daily volume (M)        23.44
Beta                            2.0329
Shareholders                    56,500
Market cap ($M)                 83,066.5
Institutional holdings (%)      72
Yield (%)                       1
12-month P/E                    24.6

We can use the dividend discount model to estimate the cost of common stock. The difference between common stock and preferred stock lies in our assumption about the growth pattern of future dividends. With common stock, we typically assume that dividends grow at a constant rate into perpetuity.
Then we can write the present value of the assumed dividend stream as

P0 = D1 / (ks − g)

where P0 is the common stock price per share, D1 is the dividend per share one year from now, ks is the required rate of return on common stock, and g is the constant dividend growth rate. Solving for ks gives

ks = D1/P0 + g

so the dividend yield, D1/P0, is the directly observable component of the required return. At present, TWX's stock price is $20.14. TWX has historically paid out about 40 percent of its earnings as dividends. Therefore, with a forecast of about $0.55 per share in earnings for next year, TWX's dividend would be forecast to be $0.55 × 0.40 = $0.22 per share. So the dividend yield, defined as D1/P0, is $0.22/$20.14 = 0.0109, or 1.09 percent.

TWX Key Growth Rates and Averages

Past growth rate (%)        1 Year    3 Years    5 Years    9 Years
Sales                        3.71      2.56      29.05      49.46
Net income                  -9.47

Ratio analysis (annual averages)
Net margin (%)               6.65      7.41
LTD % of capitalization     19.48     21.01      20.71      21.15
Return on equity (%)         4.71      5.32       8.18

Pricing/Earnings
Recent price        20.14
P/E ratio           15.612
P/E (trailing)      14.183
P/E (median)        NMF
Relative P/E ratio  0.724

Ratings
Financial strength          B++
Stock's price stability     40
Price growth persistence    20
Earnings predictability     20

Relative Value

Year        2001   2002   2003   2004   2005   2006   2007   2008
Free CF     11.8   12.8   13.5   13.9   16.0   17.9   20.1   22.5
PV of FCF  10.17   9.58   8.65   7.68   7.62   7.36   7.10

Assumptions: WACC = 16%; long-run growth g = 12%; market value of debt = $202 million; number of shares = 50 million.

PV of FCF, years 1–7 = $50.97
Terminal value (TV) at year 7 of FCF after year 7 = FCF8/(WACC − g) = $448.00
PV at year 0 of TV = TV/(1 + WACC)^7 = $183.88
Sum = value of the total corporation = $234.85 million
Less market value of debt and preferred = $202 million
Value of common equity = $32.85 million
Divided by number of shares (50), value per share = $0.66

Now assume that, beginning in the fourth year, the free cash flows are 10% lower than previously predicted:

Year    Old FCF    New FCF
2001    $11.8      $11.8
2002    $12.8      $12.8
2003    $13.5      $13.5
2004    $13.9      $12.5
2005    $16.0      $14.4
2006    $17.9      $16.1
2007    $20.1      $18.1
2008    $22.5      $20.2

We assume that the long-term growth rate and WACC are the same as previously assumed. From this information, we can calculate the following:

Total PV of new FCFs, years 1–7 = $55.09
FCF8 = $20.23
TV at year 7 = FCF8/(WACC − gL) = $20.23/0.04 = $505.76
PV of TV = $178.95
Market value of total company = $234.05
Less market value of debt = $202
Market value of equity = $32.05
Number of shares = 50
Value per share = $0.64, versus $1.37 under the original assumptions.

Thus a 10% reduction in some of the cash flows leads to a 53.28% decline in value per share.

As of September 30, 2006, TWX had net debt of $202 billion (including $11 billion on the Adelphia deal) and a net debt/EBITDA ratio of about 3.0X. In 2005, TWX paid out $2.8 billion related to a government settlement. Including the acquired systems, management sees low double-digit adjusted EBITDA growth in 2006 (off a restated base of about $10 billion in 2005), with 35% to 45% conversion of EBITDA into free cash flow. Management plans about $1 billion of cost cuts in 2006 and 2007 (excluding the $1 billion of cuts at AOL previously mentioned). We project free cash flow of over $11 billion in 2006 and 2007 combined. Pursuant to a $20 billion share buyback program, TWX plans to repurchase about $15 billion of its shares in 2006, and the remainder in 2007. Over the longer term, the company targets a 3X leverage ratio.
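The revised free-cash-flow scenario above (cash flows 10% lower from year 4 on) can be reproduced with standard two-stage mechanics: discount seven explicit years at the WACC, capitalize year-8 FCF as a growing perpetuity, subtract debt, and divide by shares. A minimal sketch using the figures from the text:

```python
# Two-stage FCF valuation, revised scenario (figures from the text, $ millions).
WACC, G = 0.16, 0.12                 # discount rate and long-run FCF growth
DEBT_MV, SHARES = 202.0, 50.0        # market value of debt; shares outstanding (M)

fcf_1_to_7 = [11.8, 12.8, 13.5, 12.5, 14.4, 16.1, 18.1]   # new FCF, years 1-7
fcf_8 = 20.23                        # first post-horizon cash flow

# PV of the explicit forecast period
pv_explicit = sum(c / (1 + WACC) ** (t + 1) for t, c in enumerate(fcf_1_to_7))
terminal = fcf_8 / (WACC - G)                    # growing perpetuity at end of year 7
pv_terminal = terminal / (1 + WACC) ** 7
equity = pv_explicit + pv_terminal - DEBT_MV
print(round(pv_explicit, 2), round(terminal, 2), round(equity / SHARES, 2))
# -> 55.11 505.75 0.64  (the $55.09, $505.76 and $0.64 figures above, within rounding)
```

The terminal value dominates the result: with WACC − g at 4%, the year-8 cash flow is capitalized at 25 times.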
TWX began paying a quarterly cash dividend of $0.05 per share on its common stock in the 2005 third quarter (about $900 million a year), raising it to $0.055 in July 2006. TWX would also receive about $600 million in cash from the dissolution of its cable joint venture with Comcast. TWX undertook several asset divestitures in the past few years to enhance its financial flexibility, notable among which are the 2004 sale of its Warner Music Group (for $2.6 billion in cash), a 50% stake in Comedy Central ($1.2 billion), a DVD/CD manufacturing business ($1 billion), and two NBA and NHL professional sports teams (undisclosed). Also, in 2006, TWX sold its book publishing business for $532 million in cash, and its Turner South network for about $375 million in cash. TWX also raised $239 million from the sale of stock in Time Warner Telecom.
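The dividend-yield arithmetic used earlier (yield = D1/P0) can be checked the same way; a minimal sketch, assuming the $0.055 quarterly rate cited above and the $20.14 share price used in the valuation:

```python
# Dividend yield = annual dividend per share / current price, matching the
# zero-growth perpetuity model (ks = D1 / P0) used earlier in the analysis.

quarterly_dividend = 0.055   # per share, after the July 2006 increase
price = 20.14                # TWX share price used in the valuation above

annual_dividend = 4 * quarterly_dividend   # 0.22 per share
dividend_yield = annual_dividend / price
print(f"{dividend_yield:.2%}")  # -> 1.09%
```

The annualized $0.22 per share matches the forecast dividend derived earlier from a 40% payout on $0.55 of earnings, so the two approaches give the same 1.09% yield.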

Friday, August 30, 2019

Ancient Chinese Contributions

The world owes a lot to the Chinese for all the major contributions and innovations they introduced. For example, during the Era of Disunity (approx. 220-581 AD) the ancient Chinese invented kites, matches, umbrellas and much more ("Inventions," n.d.). The Yuan dynasty brought us paper money, blue and white porcelain and several other contributions ("Inventions," n.d.). The discovery of gunpowder came from the Tang dynasty (618-907 AD) ("Inventions," n.d.); the list goes on. The most significant contributions came from the Han dynasty (approx. 202 BC-220 AD), which introduced the moveable rudder and sails, cast iron technology, the wheelbarrow, and the hot air balloon ("Chinese culture," 2007-2011). More importantly, the Han dynasty brought to the world the manufacturing of paper, the compass and the production of Chinese silk ("Contributions," 2003-2012). The four most ingenious or innovative contributions are paper, the compass, printing and silk. Europeans thought of Chinese silk as elegant, and traders would pay its weight in gold for this prized commodity. Silk was traded along the "Silk Road," another ancient Chinese innovation, which stretched from the Yellow River valley to the Mediterranean, nearly five thousand miles (Sayre, 2011, p. 224). The Silk Road was the doorway to the spread of ideas, religions and technologies to the rest of the world. The ancient Chinese taught the world how to harvest silk from silkworms, along with paper making, glass making and printing. The first printing technique put to use was block printing, a very lengthy process, from the ancient Tang dynasty. Much time and labor went into block printing, but once the carved block was finished, the advantages of high efficiency and large print runs made it very worthwhile ("Chinese culture," 2007-2011). The printing technique was enhanced with moveable type printing during the Song dynasty by the inventor Bi Sheng.
Moveable type printing greatly boosted printing efficiency by reducing block-making time. Other advantages were that moveable type was smaller and easier to store and could be used repeatedly, saving materials ("Chinese culture," 2007-2011). We would not need printing techniques if we did not have the creative invention of paper, also brought to us by the ancient Chinese. Before the invention of paper, characters were written on animal bones, turtle shells or stones ("Chinese culture," 2007-2011). The Han dynasty produced paper from fibrous hemp; later, improvements in technique and quality introduced by Cai Lun were made using silk rags, hemp and tree bark. His method, although now simplified, is still used today (Sayre, 2011, p. 226). It is hard to imagine the world without this ingenious invention. Everything we learn comes from some form of media printed on paper, whether it's a book, magazine, newspaper, encyclopedia or journal. Can you imagine all of us walking around with clay iPads? The compass is another great contribution to the world by the ancient Chinese. It was used primarily for religious purposes, to determine whether a building under construction was facing the right direction so it could be in perfect harmony with nature. The early compass resembled a wooden circle with a number of marks on it and a magnetic spoon on top ("Contributions," 2003-2012). Today's compass is probably the most important navigation tool we have. A mariner wouldn't dare set out to sea without a compass, nor would a pilot take a flight without one, for fear of getting lost. Of all the many contributions given to us by the ancient Chinese, the one I could not live without would be the combination of printing and paper. How would I learn without being able to research a book or reference an encyclopedia? In my career, it takes a reference manual to complete a project or task safely and properly.
I would miss being able to sit down and read a relaxing novel or magazine in my spare time. I just cannot imagine not having this wonderful contribution. I praise the ancient Chinese for all they have given us.

Thursday, August 29, 2019

Psy240 Final Analyzing Psychological Disorders Essay

You are interviewing for a psychologist position with a top company. After your face-to-face interview with the team, they have provided you with two additional assignments, Part A and Part B below, which will complete the interview process:

* Part A: A psychologist understands how biology can affect psychological activities and disorders. In your interview, you are asked about your understanding of the causes and treatment(s) of schizophrenia. In your reply, discuss the following:
    * Areas of the brain affected
    * Causal factors
    * Associated symptoms
    * The neural basis
    * Appropriate drug therapies
* Part B: Part B of the interview consists of interpreting some case studies from a biopsychologist's perspective. You are given four different case studies of disorders and have the option of choosing two out of the four case studies to analyze.
* Write a 1,750- to 2,100-word paper in APA format containing the following:
    * Introduction
    * Part A of the interview process
    * Part B of the interview process:
        * Choose two of the four case studies presented in Appendix A.
        * Discuss your understanding of the problem presented in each of the two case studies from the perspective of a biopsychologist.
        * Include each problem's relation to the nature-nurture issue and any relevant portions of the Basics to Biopsychology text.
        * Use a minimum of five outside resources, including at least 3 peer-reviewed articles.
        * Apply any helpful drug interventions or solutions.
        * Discuss the positive or negative aspects of these drug interventions or solutions.
    * Conclusion

Stereotypes of Gender, Race and Class Essay Example | Topics and Well Written Essays - 1250 words

Stereotypes of Gender, Race and Class - Essay Example The ages of women mostly depicted in the show are those in their 20s and 30s, and the ideal look is someone who is sexy and sleek. The black American woman was, however, once treated as inferior to the white woman and was once associated with slavery, especially in the early 20th century, even years after the Civil War, and their black hair is even "part of the legacy of slavery" (My Black is Beautiful, Episode 3). Aside from these, no other such comments are given regarding African-American women, as the whole show seems to be one that empowers them. In fact, although neither the host of the show nor the panelists said it, the show was somehow aimed at African-American women in the United States who still have an inferiority complex due to their skin color. The show is actually almost 99% positive, praising the Black American woman in every way possible: through their intelligence, physical appearance, and natural charm. Nevertheless, the mere presence of a special video presentation like this means that there is actually stereotyping of African-American women. Admittedly, the stereotype of an African-American woman, especially outside the United States, may be that of someone who is inferior not only because of skin color but because of both the dark skin color and the fact that they are women. Black American women, therefore, although they do not explicitly say it, may actually be facing discrimination all the time from those who look down on both women and dark-skinned people. This may even be the main reason for having such a TV show on empowering black women. In fact, the show is full of comments which are geared to uplift the status of black women. Most of these comments, like "We make any color work regardless of what color that is," or "We come in different shapes and sizes," are actually also true of any other race of women (My Black is Beautiful, Episode 1).

Wednesday, August 28, 2019

Cross-cultural communication and classroom ecology Essay - 1

Cross-cultural communication and classroom ecology - Essay Example rÐ µligion, disÐ °bility Ð µtc Ð °s wÐ µll Ð °s bÐ µing mindful of thÐ µ difficultiÐ µs thÐ °t somÐ µ groups cÐ °n fÐ °cÐ µ Ð °nd Ð µnsuring thÐ °t Ð °ny obstÐ °clÐ µs to thÐ µm Ð °rÐ µ rÐ µmovÐ µd. PÐ µrhÐ °ps surprisingly, it doÐ µs not mÐ µÃ °n trÐ µÃ °ting Ð °ll Ð µquÐ °lly. For Ð µxÐ °mplÐ µ, Ð µquÐ °lity of opportunity will not nÐ µcÐ µssÐ °rily bÐ µ Ð µnsurÐ µd if thosÐ µ who spÐ µÃ °k Еnglish Ð °s thÐ µir sÐ µcond or third lÐ °nguÐ °gÐ µ Ð °rÐ µ Ð °ssÐ µssÐ µd Ð µquÐ °lly Ð °gÐ °inst thosÐ µ who spÐ µÃ °k it Ð °s thÐ µir first Ð °nd only lÐ °nguÐ °gÐ µ. ThÐ µ formÐ µr mÐ °y nÐ µÃ µd Ð °dditionÐ °l cÐ °rÐ µ Ð °nd tÐ µÃ °ching if Ð °ssÐ µssmÐ µnt Ð °gÐ °inst othÐ µrs in thÐ µir yÐ µÃ °r is to hold mÐ µÃ °ning. In this rÐ µspÐ µct, simply rÐ µmoving obstÐ °clÐ µs from thÐ µ pÐ °th of Ð °ll studÐ µnts mÐ °y not bÐ µ Ð µnough to providÐ µ Ð °ll with Ð µquÐ °lity of opportunity. PositivÐ µ Ð °ction (somÐ µtimÐ µs rÐ µfÐ µrrÐ µd to Ð °s positivÐ µ discriminÐ °tion) mÐ °y bÐ µ nÐ µcÐ µssÐ °ry. This involvÐ µs crÐ µÃ °ting thÐ µ circumstÐ °ncÐ µs in which Ð µquÐ °lit y of opportunity cÐ °n Ð µxist, rÐ °thÐ µr thÐ °n lÐ µÃ °ving it to chÐ °ncÐ µ. For Ð ° physicÐ °lly— disÐ °blÐ µd studÐ µnt, prÐ °cticÐ °l chÐ °ngÐ µs to thÐ µ school Ð µnvironmÐ µnt Ð °rÐ µ nÐ µÃ µdÐ µd (such Ð °s rÐ °mps instÐ µÃ °d of stÐ µps). Ð  pupil struggling with Ð °ttÐ µntion dÐ µficit disordÐ µr cÐ °n find this disÐ °dvÐ °ntÐ °gÐ µ diminishÐ µd or Ð µliminÐ °tÐ µd if thÐ µy Ð °rÐ µ sÐ µÃ °tÐ µd Ð °s closÐ µ to thÐ µ tÐ µÃ °chÐ µr Ð °s possiblÐ µ Ð °nd surroundÐ µd by positivÐ µ rolÐ µ modÐ µls. For clÐ °ssroom tÐ µÃ °chÐ µrs, thÐ µrÐ µ Ð °rÐ µ Ð ° fÐ µw quÐ µstions thÐ °t cÐ °n bÐ µ focusÐ µd on pÐ µriodicÐ °lly to bring Ð °ttÐ µntion to thÐ µ issuÐ µ of Ð µquÐ °l opportunitiÐ µs. 
The goal is not to work slavishly to the need for equality of opportunity but to develop a natural instinct for sussing out when pupils may be inadvertently disadvantaged. This might be a result of the work you have asked them to do or of their interactions in your classroom. We all have prejudices; it's part of interacting with humans. But being aware of your prejudices

Tuesday, August 27, 2019

Iran hostage crisis and its effect on Iranian American Immigrants Essay

Iran hostage crisis and its effect on Iranian American Immigrants - Essay Example The takeover was planned by a student named Ebrahim Asgharzadeh, who invited people who shared his views to join his plan. On the morning of November 4, 1979, around 300-500 students surrounded the American embassy and took it over very quickly. The students demanded that Shah Reza be returned to Iran, tried and executed. Besides that, they also demanded an apology from the US for meddling unnecessarily in the internal affairs of Iran, and the release of Iran's frozen assets in the US. The takeover was intended to last only a short while, but as its popularity grew in the country and it won Khomeini's support, it was prolonged. There were a few rescue attempts, but they failed. A number of delegations were sent to request the release of the hostages, but the students insisted that their demands must be met first. The takeover resulted in the transfer of 50 tonnes of gold from America to Iran. The hostages were released as soon as US President Jimmy Carter left office and Ronald Reagan was sworn in as the new President. The takeover lasted for an extraordinary length of time and resulted in strained US-Iran relations. The new situation proved very dire for Iranian immigrants in the US. Iranians in America had excelled in business, academics and the sciences, but after the revolution, relations between the two countries were strained. Iranians were considered terrorists and were treated as second-class citizens. Their rights were not catered for by the government as they were for U.S. citizens. Iranian immigrants were ignored in every field of life; Americans showed distrust and hatred for them, and they were subjected to discrimination and prejudice in the U.S. Instead of reactive solidarity, however, some religious minorities from Iran opted to dissociate themselves from their nationality.
Muslim immigrants were not provided with this option because they were largely secular and nationalistic. Even the commercial

Monday, August 26, 2019

Career Choices in Alternative Medicine Research Paper

Career Choices in Alternative Medicine - Research Paper Example Like every other career line, alternative medicine is defined by the typical duties it entails. There is also the education that qualifies one for a career in alternative medicine, which will determine how salaries vary. As time passes, the outlook of every job changes as market dynamics and needs get redefined. Some careers have a better outlook than others, and this depends upon the functionality of the career in an ever-changing world. The intention of this paper is to explain the career opportunities that are available in alternative medicine. Definition Alternative medicine involves the prevention and treatment of illnesses through methods other than the traditional Western ones (Malhotra). A medic dealing in alternative medicine differs from those in mainstream medicine in one way: in alternative medicine, the person is addressed as a whole, while in Western medical practices, only the symptoms are treated. Depending on what the field specializes in, practitioners of a particular field may need different types of education. Alternative medicine consists of medical practices that originate mostly from the East. It is a system of medicine that involves treating the cause of illness rather than the symptoms that reveal themselves, by use of natural, non-toxic methods. It is traditional medicine from India, China, Japan, and other countries, mainly in Asia. It is deemed to be over 5,000 years old, with practices proven effective over generations; these are significantly older than modern medicine, which is about 150 years old (Natural Health Careers - Complementary & Alternative Medicine). Until recently, though, alternative medicine was viewed as obscure and shrouded in mysticism. Its importance has, however, had to be recognized, as the means involved in alternative medicine have been able to cure chronic illnesses.
The methods that are widely used in alternative medicine are naturopathy, homeopathy, and Ayurveda. In addition to these, there are also methods whose use is increasing in the medical field, including yoga, reiki, and chiropractic. Then there are those methods which are rarely used, such as Tibetan medicine, Unani, and Siddha. It is imperative that these practices be used alongside conventional Western medicine. This being the case, alternative medicine can now be referred to as integrative or complementary medicine (Seitzer). RESULTS "Suitability" Profile Most of the practices involved in alternative medicine have their origin in Eastern communities. Knowledge of any of the languages of the Eastern countries is important. Knowledge of the traditional practices and beliefs of ancient Eastern civilizations is also helpful, since it forms the basis of the different practice methods. To increase a person's suitability for pursuing a career in alternative medicine, that person needs to have an appreciation of methods of treating ailments other than conventional Western medicine. Duties and responsibilities Ayurveda, in not-so-strict terms, translates to "the science of life." Practiced in India for 5,000 years, this method insists that to prevent and treat diseases, body, mind, and spirit all need to be engaged. It includes diets and the use of herbal remedies. Naturopathy involves numerous practices such as massage therapy, use of herbal medicine, acupuncture, exercise, dietary modifications, and minor surgery (Malhotra).

Sunday, August 25, 2019

Film Studies (thinking film Essay Example | Topics and Well Written Essays - 2000 words

Film Studies (thinking film - Essay Example One cannot be condemned or belittled for saying that life often imitates art and vice versa. In fact, it's a statement of facts and contradictions that needs to be revered, understood and deconstructed in its entirety. Now really, it's not that premature to say that our past makes our future, and it's owing to this meticulous and oversensitive fashion in which our life moves that we are caught in this struggle of assessing and correlating what has happened and what is about to happen. Lights, camera, action... frozen in time, and captured for time's keep! Sure enough, literature and informative articles and write-ups give us an insight into past events and the sands of time that have elapsed over centuries, but it's needless to say that while this past may seem suitably exciting owing to the proficient writings of our forefathers, the cinematic past too speaks clearly, indeed alternatively. Alternative Most will be baffled by the use of the term alternative to describe cinema. However, if one sees this medium in isolation, it becomes apparent that the reason for this is that cinema has always been an alternative to conventional wisdom and movement through the ages. It's a reflection of the time, the aspirations, and the realizations one makes in that period. Its history is etched in frames, in dialogue, expressions and color. While the past seems magnificent in its appeal, it goes without saying that it reflects on the future. Cinema has seen a lot of transitions and manifestations through the years, and its appeal remains unbeatable even now. It's got the power to stop us in our tracks, take note of the direction and the paths we have chosen for ourselves and then question possibilities for the future. While one can go on and on about cinematic brilliance, one thing that cannot escape prominence is its history and its beautiful transformation.
And while we are gushing at the past, it seems only right to pay tribute to the rich past that has inspired present-day cinema. Robert Stam wrote, "Theories do not usually fall into disuse like old automobiles relegated to a conceptual junkyard. They do not die; they transform themselves, leaving traces and reminiscences." While Stam eloquently talks about the old giving way to the new and making room for experimentation in the process of this transition, what remains inspiring in all this is the cinema prior to the 1960s, which raised the bar for filmmakers and technicians alike. It set the foundation from which great cinema emerged and found acceptance. The era prior to the 1960s gave us filmmakers and pioneering geniuses like John Ford, Sergio Leone, David Lean, Orson Welles, Akira Kurosawa, Stanley Kubrick, Alfred Joseph Hitchcock, and Isaac Julien. The list of filmmakers who have made a niche for themselves is long when you tread the boundaries of world cinema. These are the names of only a few who have paved the way for the new generation of filmmakers to follow suit. Many theories developed from this school of thought. Gillo Pontecorvo's film The Battle of Algiers not only thematizes the racialised and sexualized look but also provides audiovisual illustrations that highlight the protagonists' angst. One can also further interpret it as a theorized orchestration of looks and glances, captured and analyzed in all their permutations

Saturday, August 24, 2019

Frustration Research Paper Example | Topics and Well Written Essays - 1250 words

Frustration - Research Paper Example There is a parallelism in the poems of Charles Baudelaire in 'Les Fleurs du Mal' (The Flowers of Evil) and Fyodor Dostoevsky's 'The Gambler' in how the persona of the poems and the narrator of the novel experience frustration. The titles themselves present an almost negated perception even before prospective readers read them. And above all, both contain stories and retellings of love. Baudelaire's suggests a dark and borderline macabre insinuation of unrequited or unfulfilled love through his liberal use of the words 'flowers' and 'evil,' while Dostoevsky's classic novel gears the reader to sympathize with the main character Alexei and his often foolish actions to gain the love of the cunning Polina. "...she paid me no attention; until eventually I became so irritated that I decided to play the boor" (Dostoevsky, p. 5). Alexei sounds off his resentment and goes into an almost foolish attempt to gain the audience of everyone at the dinner table, playing on his being Russian to coerce them into a conversation directed his way. This was among the first incidences in the novel in which his folly in wanting to gain respect despite his stature among the guests was deliberately shown. He was an intelligent man, but he was a mere tutor. His knowledge of all the dirty little secrets of the aristocrats surrounding him leads to his confidence that there is some way to balance their positions, even if just at the dinner table. This he also found at the roulette table. The game provided him with a way to level the playing field between him and the rich folks by winning. The reverence for the beauty of women as only devotional love signifies is present also in Baudelaire's.
â€Å"The real, true head, the sincere countenance/ Reversed and hidden by the lying face./ Poor glamorous beauty !/ the magnificent stream/ Of your tears flows into my anguished heart ;/ Your falsehood makes me drunk and my soul slakes its thirst/ At the flood from your eyes,

Friday, August 23, 2019

Assessment of National Express Group Operations, Macroeconomic Essay

Assessment of National Express Group Operations, Macroeconomic Environment and Challenges - Essay Example This essay presents an informative analysis of the National Express Group's economic position. The company was founded in 1972 by the state-owned National Bus Company. The company has grown tremendously since its privatization in 1988, from a small transport service provider to a multinational firm. It currently has operations in the UK, Canada, Morocco, Spain, the US and Portugal, and has recently expanded its business into other parts of Europe as well. The company currently runs more than 1,600 buses across the UK and is the leading urban bus service provider. It also provides a bus service in North America on which 1 million students travel to school daily. National Express Group is focused on achieving excellence in its existing markets and on exploring new market opportunities to drive the company forward. Its main objective is the growth of its business through customer satisfaction. The macroeconomic environment consists of those variables over which the company has no control. Companies cannot control macroeconomic factors but can minimize their impact by taking strategic decisions accordingly. These factors exist outside a company and usually have a great effect on its functioning. In the UK, market conditions were challenging, but thanks to its strong market position and flexible coach supply model the company still delivered a high profit. As for the National Express bus service in the UK, the company's costs rose during 2009 due to the increase in the cost of fuel.

Thursday, August 22, 2019

Techniques in Immunocytochemistry Essay Example | Topics and Well Written Essays - 250 words

Techniques in Immunocytochemistry - Essay Example After that, both slides were incubated in a medium containing diaminobenzidine (DAB). The sections were mounted with a minimal volume of aqueous mountant and were then visualized under a light microscope. The tissue that was incubated with the primary antibody showed a blue nucleus against a brown-colored background; this indicates the presence of the target antigen (Griffins, 2011). Immunocytochemistry is a common laboratory technique that makes use of antibodies to target specific antigens in a cell via specific epitopes (an epitope is the part of an antigen that is recognized by antibodies). These bound antibodies can then be detected using many different methods. The antigen is bound by a primary antibody, whose signal is then amplified by use of a secondary antibody. The secondary antibody can be enzyme-conjugated. Immunocytochemistry uses antibodies as specific reagents and allows unique detection of proteins and molecules; it is a valuable tool for determining the cellular content of individual cells (Gillian et al., 2011). The tissue sections of the cardiac myosin were cut and fixed in acetone. After this, the slides were unwrapped and each slide was placed in a plastic Petri dish. The two plastic Petri dishes were labeled: one was the control and the other the antibody. A piece of paper was put underneath each slide to form a humidity chamber. The tissue sections were blocked by adding two drops of a PBS buffer containing three percent BSA, and the tissue was then incubated at room temperature for five minutes. After that, the excess medium was tipped off and the zone around the tissue section was wiped dry with paper toweling.

The collapse of the European economies after World War 1 Essay Example for Free

The collapse of the European economies after World War 1 Essay During the course of this essay I will discuss how America was advantaged by the collapse of the European economies after World War 1, and how the policies of the Republican Government helped to surge the American economy. I will discuss how this economic boom did not benefit everyone in America, how the motor car industry helped stimulate America's growing economy, how luxury goods became more available in America, and how hire purchase and credit were widely available during this time of prosperity. I will outline who did and who did not benefit from this booming economy, and how reversals in U.S. policy occurred during 1919-1922. Then I will explain the Fordney-McCumber Tariff Act, which placed a tariff on foreign goods entering America to encourage Americans to purchase American goods, thus helping the economy to grow further and leading to an increase in consumer spending. I will tell how Woodrow Wilson proposed the League of Nations, and how the USA isolated itself from the international community so as to avoid conflict. I will also look at how America's vast natural resources were a contributing factor to the growth of the economy. Before the war, Germany had the largest chemical industry in the world, but after the war it was significantly damaged and America took the Germans' place in this industry, which greatly improved America's economy. America also took over European trade. Europe was on its knees after the war, so it borrowed money from the U.S. This provided the U.S. with a good, regular source of revenue. The American economy was running away with itself, due in large part to the explosion of the car industry. Henry Ford was a car manufacturer who came up with the idea of the first production line. This meant that different jobs were allocated to different people at different stages, making production more efficient.
The car industry used up to 80% of America's steel, 75% of its glass and 65% of its leather and rubber. By the end of the 1920s, the motor car industry was the biggest industry in America. It employed hundreds of thousands of workers directly and kept many people in other industries employed. Petrol was needed to run the car, which brought about new branches of business that grew off the car industry: petrol stations, road building, motels, roadside diners, billboards and mechanic services were just some of these new businesses. Road construction was the biggest single employer in the 1920s. Owning a car was no longer a privilege reserved for the rich. The production line had made cars cheaper, so more people could afford them. There was one car to every five people in the USA, compared with one to 43 in Britain and one to 700 in Russia. The car made it possible for more people to buy houses further from the cities, which boosted the house-building industry. As the American economy grew, more people spent money on luxury goods; this led to such goods becoming more available in America and more companies making them. Telephones, radios, vacuum cleaners and washing machines were mass-produced on a vast scale, making them cheaper. New electrical companies such as Hoover became household names; they used the latest, most efficient techniques proposed by the industrial efficiency movement. At the same time, the larger industries used sophisticated sales and marketing techniques to get people to purchase their products. Mass nationwide advertisements were first used in the U.S. during the war to get Americans to support the war effort. Many people had learned their skills during the war and had now set up agencies to sell cars, cigarettes, clothing and other products. Poster advertisements, radio advertisements and travelling salesmen encouraged America to spend.
Even if they did not have money, people could now borrow it easily or take advantage of the new buy-now-pay-later hire purchase schemes. By this time, the car industry was flourishing; the most famous car produced was the Model T. More than 15 million were produced between 1908 and 1925, and in 1927 they were being produced at a rate of one every ten seconds. In 1929, 4.8 million cars were made. The boom in the American economy was helped by Republican policies from 1920 to 1932. All the U.S. presidents in this period were Republicans, and Republicans also dominated Congress. Republicans believed that government should interfere as little as possible in the everyday lives of the people, an attitude called laissez-faire. They believed the job of the president was to leave business to the businessmen. The Republicans believed in import tariffs, which made it expensive to import foreign goods. For example, in 1922 Harding introduced the Fordney-McCumber tariff, which made imported food expensive in the USA. These tariffs protected businesses against foreign competition and allowed American companies to grow even more rapidly. The USA also began closing its borders to foreign immigrants. Taxation was kept as low as possible; this brought some benefits to ordinary working people, but it brought even more to the rich. The Republicans' thinking was that the more money people had, the more they would spend in America, and the wealthy would reinvest in America. They also allowed the development of trusts: huge super-corporations which dominated industry. Woodrow Wilson and the Democrats had fought against trusts because they believed it was unhealthy for men such as Carnegie (steel) and Rockefeller (oil) to have almost complete control of one vital sector of industry. The Republicans allowed the trusts to do what they wanted, believing that the captains of industry knew better than politicians did.
However, this time of prosperity in America was not felt by the whole population. Farming was at a low point. Total U.S. farming income dropped from $22 billion in 1919 to just $13 billion in 1928. A number of factors contributed to these problems. After the war, Europe imported less food from the U.S., partly because Europe was poor and partly due to the tariffs which stopped Europe from exporting to the U.S. Farmers were also struggling against competition from efficient Canadian wheat producers, and U.S. population growth was slowing, which meant fewer new mouths to feed. At the root of all these difficulties was overproduction, which resulted in wheat being produced that simply nobody wanted. In the 1920s the U.S. farmer was each year producing enough to feed his family and fourteen others. Prices dropped dramatically as desperation kicked in and farmers tried to sell their produce; most farm prices fell by 50 per cent. Hundreds of rural banks collapsed in the 1920s, and five times as many farms went out of business as in the 1900s and 1910s. Not all farmers were affected by these problems; wealthy Americans wanted fresh vegetables throughout the year. But for most farmers the 1920s were a time of great difficulty, and this was a major concern: about half of all Americans lived in rural areas, so the difficulty affected more than 60 million Americans. Many Americans lost their jobs; these were largely unskilled workers, mainly immigrants.

Wednesday, August 21, 2019

Novel Planar Nanodevices for Chemical Sensing Applications

Novel Planar Nanodevices for Chemical Sensing Applications In recent years, planar electronic nanodevices have attracted much attention due to their simple architecture, ease of fabrication and low cost of manufacture. Such devices address a wide variety of applications in the printed and plastic electronics industry. Using this approach, a new type of sensor, which is sensitive to different chemicals, has been developed and is reported here. By exploiting the unique characteristics of semiconductor asymmetric nanochannels, a highly selective and sensitive planar nano-transistor based chemical sensor has been realised which can discriminate between a wide range of chemical compounds in the ambient atmosphere. The active part of the sensor device was fabricated in a single nanolithography step and was tested using a variety of chemicals, including polar protic, polar aprotic and nonpolar solvents. The sensing results showed that all three solvent categories exhibited a unique chemical signature, which could be identified by an increased or decreased drain current depending on the analyte used. A significant rise in transistor drain current was observed when the device was exposed to polar aprotic solvents compared to polar protic and nonpolar ones. Further, it was noticed that exposure of the device to polar protic solvents, which have hydroxyl (–OH) functional groups in their molecular framework, produced very high hysteresis in current-voltage measurements. In contrast, the device exhibited very little hysteresis when exposed to polar aprotic and nonpolar solvents, with the latter being the minimum of all. The effect of solvent polarity on the sensor's drain current in terms of adsorption and desorption processes has been studied and is reported here. The effects of water molecules in ambient air and of hydroxyl groups on the device's hysteresis behaviour have also been investigated. 
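The adsorption/desorption picture described in this abstract can be illustrated with a toy first-order (Langmuir-style) kinetic model. This is only a sketch: the time constants, baseline and current shift below are invented for demonstration and are not fitted to the device reported in the thesis.

```python
import math

def drain_current(t, i0, di_max, tau_ads, t_off=None, tau_des=None):
    """Toy first-order adsorption/desorption model of sensor drain current.

    During exposure the current shift grows as 1 - exp(-t/tau_ads); after the
    analyte is removed (t >= t_off) it relaxes back with time constant tau_des.
    All parameters are illustrative, not measured values.
    """
    if t_off is None or t < t_off:
        return i0 + di_max * (1.0 - math.exp(-t / tau_ads))
    # current shift reached at the moment the analyte was removed
    di_peak = di_max * (1.0 - math.exp(-t_off / tau_ads))
    return i0 + di_peak * math.exp(-(t - t_off) / tau_des)

# Example: baseline 1 uA, +0.4 uA full-scale shift, exposure ends at t = 10 s
for t in (0.0, 2.0, 10.0, 12.0, 30.0):
    i = drain_current(t, 1.0, 0.4, 2.0, t_off=10.0, tau_des=5.0)
    print(f"t={t:5.1f} s  I={i:.4f} uA")
```

A model like this reproduces the fast, reversible, reproducible response cycles described above: the current rises during exposure and decays back to baseline on purging.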
As the gas sensing properties of the sensor are related to the chemisorption of gaseous species at its surface, a detailed understanding of the charge transfer in a chemisorption process is very important; hence most of the discussion in this report focuses on explaining this complex phenomenon, with special emphasis on the role of surface states during the sensing process. All the measurements were performed at room temperature, and the responses were found to be fast, reversible and reproducible over many cycles of vapour exposure, suggesting that the stability of the device is very high. The simple, low-cost, multi-chemical sensing device described in this work could be useful for a variety of applications, such as environmental monitoring, sensing in chemical processing plants, and gas detection for counter-terrorism. Nanofabrication and Characterisation 4.1 Introduction Recent advances in the area of micro/nanofabrication have created a unique opportunity to manufacture nanometre-sized structures with a precision that serves a wide range of applications in the electronic, optical, chemical and biological fields (B. Bhushan (Ed.), Springer Handbook of Nanotechnology, 2nd rev. and extended ed., Springer, 2007). This chapter will introduce two such major top-down fabrication techniques, namely photolithography and e-beam lithography, followed by a brief description of the atomic force microscope and scanning electron microscope, which have been used in this project to fabricate and image the planar nanosensors reported in chapter 5. 4.2 Lithography In semiconductor processing, patterning techniques are very important. Lithography is a process of transferring patterns from one medium to another (A. A. Tseng, K. Chen, C. D. Chen and K. J. Ma, "Electron beam lithography in nanoscale fabrication: recent development", IEEE Transactions on Electronics Packaging Manufacturing, vol. 26, no. 2, pp. 141-149, April 2003). The transferred patterns are then subjected to a development process that selectively removes either the exposed or the unexposed resist, depending on the resist type: with a positive resist the exposed regions are removed, whereas with a negative resist the unexposed regions are developed away, as shown in figure 4.1. The exposure source may be ultraviolet light, X-rays, ion beams or electron beams; this section focuses on systems using ultraviolet light and electron beams. 4.2.1 Photolithography Photolithography is the most common patterning method, by which the shape and critical dimensions of a semiconductor device are transferred onto the surface of the wafer (photolithography lecture notes). This is the technique used to define the mesa structures and metallic contacts of the device described in this thesis. A photosensitive resist is spun onto the substrate and exposed through a mask, which transfers the patterns onto the sample by means of UV light. The sample is then developed to obtain the desired pattern, as shown below. Figure 4.1. Typical photolithography process. The substrate (A) is first coated with photoresist (B) and then exposed to UV radiation through a mask (C). The latent image is either removed (D) or fixed (E) by a developer solution. Source: M. J. Madou, Fundamentals of Microfabrication, 2nd ed., CRC Press (2002), p. 19. 4.2.2 Metal film deposition In order to perform electrical measurements on the device, we need to define the metal patterns through which it can be connected to the electrical probe station. Two kinds of contacts, ohmic and Schottky, are formed through a process called lift-off, as shown in figure 4.2. The GaAs substrate is coated with photoresist and the patterns are defined by photolithography. 
First the metal film is thermally evaporated, and the unwanted metal lying on the resist is lifted off by dissolving the photoresist in acetone. To facilitate the lift-off technique, photoresist edges with undercut profiles are desirable. This can be achieved by treating the photoresist with chlorobenzene before the UV exposure: the chlorobenzene soaks into the photoresist and makes its "skin" harder. After exposure and development, the profile of the photoresist edges forms an undercut [M. J. Madou, Fundamentals of Microfabrication, 2nd ed., CRC Press (2002), p. 19; M. Hatzakis, B. J. Canavello and J. M. Shaw, IBM J. Res. Develop. 24, 452 (1980)], as shown in figure 4.2E. Figure 4.2. Typical lift-off process. The substrate (A) is coated with photoresist (B) and then prebaked to partially dry the solvents (C). A dip in chlorobenzene follows to make the photoresist skin harder. (D) UV exposure through the mask. The edges of the patterns developed into the photoresist after such a process show a typical undercut profile (E). The metal is evaporated onto the sample, forming a thin film (F). The unwanted metal is then lifted off by dissolving the remaining photoresist in a solvent (G). Source: M. J. Madou, Fundamentals of Microfabrication. Ohmic contacts (obey Ohm's law, linear I-V) These are essentially formed by a metal layer deposited on a highly doped semiconductor. Because of the high doping concentration a very thin Schottky barrier is formed, and the charge carriers, namely electrons and holes, can easily tunnel through it. The substrate used in this research work consists of semiconductor heterostructures in which a two-dimensional electron gas (2DEG) is confined between undoped GaAs and doped AlGaAs layers (R. Williams, Modern GaAs Processing Methods, Artech House (1990), Chapter 11). 
The choice of metals for any given application depends on conductivity, thermal stability, adhesion, the nature of the electrical contact with the semiconductor (work function/barrier height), and ease of patterning (lecture notes). A thin layer (~45-50 nm) of Au/Ge/Ni alloy, the most common scheme for making alloyed ohmic contacts to n-type GaAs, was used for this work; it was evaporated onto the substrate surface and alloyed at temperatures higher than 360 °C. In this alloy, the germanium diffuses into the GaAs and acts as a dopant, while nickel acts as a wetting layer and also assists the diffusion of Ge into the GaAs. Schottky contacts (rectifying, diode-like I-V) Depositing a metal film on an undoped, or lightly n-doped, semiconductor whose electron affinity is lower than the work function of the metal forms a thick Schottky barrier, typically several hundred meV high; the thermal energy of the electrons, about 26 meV at room temperature, is too low to permit thermionic emission over the barrier (R. Williams, Modern GaAs Processing Methods, Artech House (1990), Chapter 12). When a bias is applied to the metal, the height of the energy barrier seen by electrons injected from the metal into the semiconductor does not change, being fixed by the metal work function and the electron affinity of the semiconductor. On the other hand, the barrier seen by electrons injected from the semiconductor into the metal is increased or decreased by a negative or positive bias respectively. This mechanism is responsible for the well-known rectifying effect observed in Schottky junctions [V. L. Rideout, Thin Solid Films 48, 261 (1978); A. M. Cowley and S. M. Sze, J. Appl. Phys. 36, 3212 (1965)]. At negative biases, the Schottky junction essentially behaves like a capacitor: in substrates with embedded 2DEGs it can be utilised as a gate electrode to modulate the 2DEG carrier concentration, e.g. for the fabrication of field-effect transistors. 
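The rectifying behaviour described above follows the standard thermionic-emission diode equation, I = Is(exp(qV/nkT) - 1) with Is = A·A*·T²·exp(-qφB/kT). The sketch below uses textbook-style assumed values (barrier height, ideality factor, contact area), not measured parameters of this device.

```python
import math

# Illustrative thermionic-emission model of a Schottky diode I-V curve.
Q = 1.602176634e-19   # electron charge, C
K_B = 1.380649e-23    # Boltzmann constant, J/K

def schottky_current(v, t=300.0, phi_b=0.8, n=1.1, area=1e-8, a_star=8.16):
    """Diode current I = Is*(exp(qV/(n*k*T)) - 1).

    phi_b : assumed barrier height in eV
    n     : assumed ideality factor
    area  : assumed contact area in cm^2
    a_star: effective Richardson constant, A cm^-2 K^-2 (commonly
            quoted value for n-type GaAs)
    """
    kt = K_B * t
    i_s = area * a_star * t**2 * math.exp(-Q * phi_b / kt)  # saturation current
    return i_s * (math.exp(Q * v / (n * kt)) - 1.0)

# Rectification: forward current is many orders of magnitude above reverse.
i_fwd = schottky_current(0.5)
i_rev = schottky_current(-0.5)
print(f"I(+0.5 V) = {i_fwd:.3e} A,  I(-0.5 V) = {i_rev:.3e} A")
```

At negative bias the exponential term vanishes and the current saturates at -Is, consistent with the capacitor-like, gate-electrode use of the junction described above.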
4.2.3 Electron beam lithography (EBL) One of the modern approaches to fabricating nanoscale structures is e-beam lithography, in which electrons are accelerated by a very high voltage, typically tens of kV, and then focused onto a layer of polymer to create very fine patterns. EBL provides much higher resolution and precision than photolithography or X-ray lithography: patterns with feature sizes well below 20 nm can be achieved in modern systems. EBL also does not require the fabrication of masks as in the photolithographic process. There are two methods of exposing the e-beam onto the substrate surface (R. Waser (Ed.), Nanoelectronics and Information Technology, Wiley-VCH, 2005, chapter 9, pp. 234-236): direct writing and projection printing. Direct writing is the most common EBL approach and was used for fabrication of the device reported here. In this approach, a beam of electrons directly impinges on the resist to form the pattern in a serial fashion. As shown in figure 4.6, a direct writing system consists of a source of electrons, a set of focusing optics, a blanker to turn the beam on and off, a deflection system for moving the beam, and a stage for holding the substrate. Projection printing, by contrast, projects an entire pattern simultaneously onto the wafer and comes in two variants: SCALPEL (scattering with angular limitation in projection electron-beam lithography) and PREVAIL (projection reduction exposure with variable axis immersion lenses). However, we will concentrate only on the direct writing technique. Figure: dose test patterns of an array of self-switching diodes (SSDs) fabricated using e-beam direct write. System configuration Figure: simplified structure of a SEM column. The blue lines show the trajectory of the electrons. 4.2.4 E-beam process and proximity effect To perform electron beam lithography, PMMA (poly(methyl methacrylate)) resist was used, which is chemically changed under exposure to the electron beam. 
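Because direct writing exposes the pattern serially, pixel by pixel, the dwell time per pixel follows from the standard relation dose = (beam current × dwell time) / pixel area. A minimal sketch, with illustrative numbers rather than the actual machine settings used in this work:

```python
# Illustrative e-beam direct-write exposure calculation. The dose relation
# is standard practice; the dose, current and pixel size below are assumptions.

def dwell_time_s(dose_uC_per_cm2, beam_current_nA, pixel_nm):
    """Dwell time per pixel needed to reach a target areal dose."""
    pixel_area_cm2 = (pixel_nm * 1e-7) ** 2         # 1 nm = 1e-7 cm
    charge_uC = dose_uC_per_cm2 * pixel_area_cm2    # charge per pixel, uC
    return charge_uC * 1e-6 / (beam_current_nA * 1e-9)  # C / A = seconds

# Example: 300 uC/cm^2 (a PMMA-scale dose), 1 nA beam, 10 nm pixel grid
t = dwell_time_s(300.0, 1.0, 10.0)
print(f"dwell time per pixel: {t * 1e6:.3f} us")
```

The same arithmetic explains why serial direct writing is slow compared with projection printing: total write time scales with the number of pixels in the pattern.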
The final resolution of patterns in the e-beam resist, and their eventual transfer into the substrate, can be affected by imperfections in the electron optics, interaction with the magnetic environment, the overall thermal stability, and the interaction between the beam and the substrate; all of these play an equally important role in determining the ultimate system performance. When the electron beam strikes the polymer film, or any solid material, it loses energy via elastic and inelastic collisions, collectively known as electron scattering. Elastic collisions change the direction of the electrons, whereas inelastic collisions lead to energy loss. As the electrons penetrate through the resist into the substrate, some of them undergo large-angle scattering and return through the resist as backscattered electrons. This causes additional exposure in the resist and is known as the proximity effect. The magnitude of electron scattering depends on the density of the resist and substrate as well as on the velocity of the electrons, i.e. the accelerating voltage (G. Cao, Nanostructures and Nanomaterials, Imperial College Press, 2004, pp. 280-300; M. A. McCord and M. J. Rooks, in Handbook of Microlithography, Micromachining and Microfabrication, P. Rai-Choudhury, Ed., Bellingham, WA: SPIE Optical Engineering Press, 1997, ch. 2, pp. 139-249). The proximity effect is more severe in dense patterns, particularly when the separation between adjacent structures is less than 1 µm. Since the amount of backscattered electrons depends on the substrate material, a dose calibration is necessary each time different substrates and resist thicknesses are used. Electron scattering in resist and substrate The scattered electrons also expose the resist. The smallest feature you can write with the e-beam depends on a large number of factors: the spot size used, the type of resist, the thickness of the resist, the density of the features and the substrate material. 
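The dependence of scattering on accelerating voltage can be put in rough numbers with the Kanaya-Okayama range formula, a standard approximation for electron penetration depth: R ≈ 0.0276·A·E^1.67 / (Z^0.89·ρ) micrometres, with E in keV, A in g/mol and ρ in g/cm³. The sketch below uses values for silicon; it is only an order-of-magnitude estimate, not a simulation of the actual resist stack.

```python
# Kanaya-Okayama electron range estimate (defaults: silicon, Z=14,
# A=28.09 g/mol, rho=2.33 g/cm^3). Illustrates the voltage dependence
# of beam penetration discussed in the text.

def ko_range_um(e_keV, a=28.09, z=14, rho=2.33):
    """Approximate electron penetration depth in micrometres."""
    return 0.0276 * a * e_keV**1.67 / (z**0.89 * rho)

for e in (10, 20, 100):
    print(f"{e:3d} keV -> ~{ko_range_um(e):.1f} um in Si")
```

The estimates (roughly 1.5 µm at 10 keV, 5 µm at 20 keV and 70 µm at 100 keV) agree in order of magnitude with the penetration depths quoted in the following paragraph; exact figures depend on the model used.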
When electrons are used to expose a pattern in resist, the process is not simple. Electrons enter the resist and hit the atoms of the resist, and will either forward-scatter or backscatter. Backscattered electrons from the resist leave the resist and, in general, do not contribute to the resist exposure; forward-scattered electrons continue into the resist and contribute to the exposure. The thicker the resist, the larger the forward scattering and the lower the resolution. High-energy electrons (in our case 100 kV) will go through the resist and deep into the substrate. Here they will again scatter, both forwards and backwards. In this case the forward-scattered electrons move away from the resist and do not contribute to the exposure, while backscattered electrons from the substrate make a large contribution to the exposure. The higher the energy of the incoming electrons, the deeper they penetrate into the substrate, and hence their contribution to the resist exposure is reduced. In the figure below you can see that going from 10 kV to 20 kV increases the penetration depth of the electrons from 1 µm to around 6 µm; at 100 kV the penetration depth in silicon is around 100 µm. Figure: schematic diagram of the inter-proximity and intra-proximity effects. The smallest feature sizes are achieved when the features are isolated from one another. As you place features closer together, the backscatter from neighbouring features contributes to the exposure, and it becomes harder to find the correct dose to expose all of your features correctly. This is called the proximity effect, and it has two main components: inter-proximity and intra-proximity. With inter-proximity, when two features are close together, the electrons from the exposure of one shape contribute to the dose of the neighbouring pattern. The larger and closer the features, the worse this effect. 
With intra-proximity, the dose in the centre of the pattern is larger than at the edges, and especially the corners. This is simply a geometric effect, as there are fewer electrons contributing to the dose in the corners of the shape. The electrons also need a path to ground. If you are using a conducting (or semiconducting) substrate, the contact with the holder is sufficient to provide a conducting path. If you are using an insulating substrate (fused glass, quartz), you will need to provide a conductive path for the electrons. This is normally done by evaporating a metal layer on top of or underneath the resist. Aluminium and chrome are often good choices, as they can usually be removed easily without affecting the resist, but you should check the chemical compatibility of your process with the removal procedure. Performing a meaningful dose test Exposing a pattern correctly usually requires a preliminary test exposure referred to as a dose test. In this test, the pattern is repeated several times on a test substrate, with each repetition performed at a different dose or set of doses, creating a matrix of different exposure conditions. Once the pattern has been developed and pattern transfer has been performed, the correct dose can be identified by inspection in a suitable tool (scanning electron microscope, atomic force microscope, optical microscope, etc.). There are several issues which can impact the usefulness of a dose test. Here are some guidelines: Use the same type of substrate; if there are films present on the surface of the substrate, use a test substrate with the identical film stack. For large arrays of features, shooting the entire array as a test is not an efficient use of time; however, reducing the size of the array to an unrealistically small extent can give incorrect results during the test due to differences in the proximity effect. Expose your patterns so that they are easy to locate. 
For example, do not expose a test pattern consisting of a 500 micron x 500 micron array of 50 nm squares in the middle of a 150 mm wafer; you will probably never find it. Including some locating features (large lines or a box surrounding the pattern) can help tremendously. If you are exposing an array of patterns, use as small a repeat vector as possible. This will make locating the entire array easier and minimise the chances of getting lost when travelling between adjacent elements of the array. Proximity effect As an electron from the writing beam strikes the surface of a substrate, it undergoes various scattering events, losing energy and generating secondary electrons. The energy of most secondary electrons falls between 1 and 50 eV. Secondary electrons close to the substrate/resist interface are actually responsible for the bulk of the resist exposure process. While their range in resist is only a few nanometres, they create what is known as the proximity effect. Simply put, the proximity effect is the change in size of pattern features as a consequence of non-uniform exposure. While the dose from the primary beam may be uniform across an entire pattern, the contribution of secondary electrons from the substrate may differ depending on pattern geometry. Two adjacent features will contribute a background dose of secondary electrons to each other, resulting in a higher effective dose. This causes a broadening of the exposed features, which is particularly apparent with dense features (e.g. gratings). Consequently, dense arrays of features may require significantly less dose from the primary beam to print correctly. Pattern size can also be adjusted to compensate for this effect. For example, 100 nm lines 100 nm apart are typically drawn in CAD as 90 nm lines 110 nm apart to get them to print correctly. This strategy stops working at the edges and corners of patterns. 
This sometimes requires the creation of dummy patterns or devices outside of the primary pattern region to get the main features of interest to print correctly. One common practice is to draw a box around the pattern to normalise the dose in the primary pattern region. 4.3 Imaging nanostructures Characterisation and manipulation of individual nanostructures requires not only extreme sensitivity and accuracy, but also atomic-level resolution; this has led to various microscopes that play a central role in the characterisation and measurement of nanostructured materials (G. Cao, Nanostructures and Nanomaterials, Imperial College Press, 2004, pp. 280-300). When we think of microscopes, we usually think of optical or electron microscopes, which image an object by focusing electromagnetic radiation, such as photons or electrons, on its surface and produce images with very high magnification. However, the images obtained with these microscopes provide information only in the plane horizontal to the surface of the object and do not give any information about the vertical dimensions of the object's surface, its height and depth. This section deals with the imaging of surface topography and surface property measurements of the planar sensor using AFM and SEM techniques, which can provide all the necessary information in both the horizontal and vertical planes (www.afmuniversity.org/pdf/Chapter_1_.pdf, pp. 1-16). 4.3.1 Atomic force microscopy (AFM) The AFM is a very high-resolution type of microscope from the family of scanning probe microscopes (SPM), with resolution more than a thousand times better than the optical diffraction limit (http://en.wikipedia.org/wiki/Atomic_force_microscope). Unlike traditional microscopes, the AFM does not rely on electromagnetic radiation to create an image; it is a mechanical imaging instrument that measures the three-dimensional topography as well as physical properties of a surface with a sharpened probe. 
(www.afmuniversity.org/pdf/Chapter_1_.pdf, pp. 1-16) AFM basic principles The AFM consists of a very sharp tip attached to a cantilever, positioned close enough to the surface that it can interact with the atomic and molecular forces associated with the surface. A collimated laser beam is focused onto the cantilever, which scans across the surface such that the forces between probe and surface remain constant. An image of the surface is then produced by monitoring the precise motion of the probe, which can sense movements as small as 0.1 nm. Such high resolution allows imaging of even single atoms, which are typically 0.5 nm apart in a crystal. Normally the probe is scanned in a raster-like pattern, as shown in figure 4 (www.afmuniversity.org/pdf/Chapter_1_.pdf, pp. 1-16). Source: http://www.afmuniversity.org/index.cgi?CONTENT_ID=33 AFM probe: cantilever and tip The AFM is a force sensor with a sharp tip used to probe the surface. When the tip at the end of the cantilever interacts with the surface, the cantilever bends and consequently the beam path also changes, causing the amount of light in the photodetector sections to change. Thus, the electronic output of the force sensor is proportional to the force between the tip and the sample. Tips used for probing the surface are usually made of silicon, have a radius of about 10-20 nm, and can be coated with silicon nitride to make them harder, or with noble metals, such as gold and platinum, to locally probe electrical quantities or to induce chemical modifications. Optical detection and piezoelectric scanner In order to detect the cantilever movements when the AFM is operating in ambient conditions, optical detection is used. Reflected light from the focused laser beam is collected by a photodiode, and the cantilever deflection and torsion are detected as a change in the photocurrents of the photodiode elements, as shown in figure 4. 
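The optical-lever detection just described is commonly implemented with a four-quadrant photodiode, and the deflection and torsion signals are derived by standard sum-and-difference arithmetic on the quadrant photocurrents. A minimal sketch; the quadrant currents below are invented example values, not measured data.

```python
# Quadrant layout assumed here: a = top-left, b = top-right,
# c = bottom-left, d = bottom-right.

def afm_signals(a, b, c, d):
    """Return (vertical deflection, lateral torsion) signals, normalised
    by the total photocurrent so they are insensitive to laser power."""
    total = a + b + c + d
    deflection = ((a + b) - (c + d)) / total  # spot moves up/down: bending
    torsion = ((a + c) - (b + d)) / total     # spot moves left/right: twisting
    return deflection, torsion

# Spot displaced slightly towards the top-left quadrant:
defl, tors = afm_signals(a=1.10, b=1.00, c=0.95, d=0.95)
print(f"deflection = {defl:+.3f}, torsion = {tors:+.3f}")
```

Normalising by the total current is the usual trick that makes the signals depend on spot position rather than on laser intensity fluctuations.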
In the typical AFM configuration the tip is kept still, and the imaging is performed by moving the sample with a piezoelectric scanner, also referred to as a piezo tube, as shown in figure 4b. By controlling the bias of one inner and four outer electrodes, the piezo tube can be moved in three dimensions. The photosensitive detector measures the change in optical beam position and hence the change in cantilever height. Feedback control Feedback control is used in the AFM to maintain a fixed relationship, or force, between the probe and the surface. According to the mode used, the feedback loop can be controlled either by the cantilever deflection (contact mode) or by the amplitude of the cantilever oscillation (dynamic modes). The typical feedback system used in contact mode is shown in figure 3.11. The feedback control operates by measuring the force between the surface and the probe, then controlling a piezoelectric ceramic that establishes the relative position of the probe and surface. AFM modes: tip-sample interactions Depending on the separation between the tip and the sample, a variety of forces can be measured by the AFM. At shorter distances van der Waals forces are predominant, whereas these forces become negligible as the tip-sample distance increases; forces such as electrostatic attraction or repulsion, current-induced forces or static magnetic interactions come into play at these larger separations. Approximating the tip-surface interaction by a Lennard-Jones-type potential U(z) = B/z^12 - A/z^6, the force is F(z) = -dU/dz = 12B/z^13 - 6A/z^7, where the first term is repulsive and the second attractive; the coefficients B and A depend upon the surfaces involved. Detectable forces for an AFM are about 1 nN in the contact regime and 1 pN in the non-contact regime (theoretically down to 10^-18 N) (R. Wiesendanger, "Future sensors", in H. Meixner and R. Jones, eds., Sensors Vol. 8: Micro- and Nanosensor Technology / Trends in Sensor Markets). Based on these interactions, the AFM has two main operational modes: contact mode and dynamic mode. Depending on the resonant frequency shift of the tip-sample system, dynamic mode is further divided into tapping mode and non-contact mode. Imaging for this work was carried out in tapping mode. Contact mode Also called repulsive-static mode: the tip rides on the sample in close contact with the sample surface (low spring constant k). Frictional forces arise in this mode, so the tip might damage the sample surface. Non-contact mode Also called attractive-dynamic mode: the tip hovers 5-15 nm away from the sample surface. The force sensed in the feedback loop is typically the van der Waals force; the applied force (dependent on height z) changes the cantilever oscillation frequency. Figure: AFM measurement; in the figure, PSPD represents the position-sensitive photodetector. Tapping mode Also called repulsive-dynamic mode: the AFM tip taps the surface as it maps the height z. This mode eliminates the hysteresis due to the tip sticking to the sample, and it is also less likely to damage the sample. 4.3.2 Scanning electron microscopy Scanning electron microscopy is also one of the major techniques for imaging nanostructures. Although the AFM gives high-resolution images with great precision, it takes a long time to scan and image the surface of a sample. The SEM provides an alternative to the AFM that is very fast at imaging samples in both the horizontal and vertical directions. These schematics show the ray traces for two probe-forming lens focusing conditions: small working distance (left) and large working distance (right). Both conditions have the same condenser lens strength and aperture size. 
However, as the sample is moved further from the lens, the following occurs: the working distance S is increased, the demagnification decreases, the spot size increases, and the divergence angle alpha is decreased. The decrease in demagnification is obtained when the lens current is decreased, which in turn increases the focal length f of the lens. The resolution is decreased with an increased working distance, because the spot size is increased. Conversely, the depth of field is increased with an increased working distance, because the divergence angle is smaller. Comparison between AFM and SEM The AFM is most often compared with electron beam techniques such as the SEM or TEM. With an AFM, if the probe is good, a good image is measured (www.afmuniversity.org/pdf/Chapter_1_.pdf, pp. 1-16). The following comparison between AFM and SEM gives a fair idea of their capabilities. Both the AFM and SEM measure topography, and both types of microscope can measure other surface physical properties: the SEM is good for measuring chemical composition, and the AFM is good for measuring mechanical properties of surfaces. Summary This chapter has covered the main processing and imaging techniques used for fabrication of the nanosensor reported in chapter 5. Patterning of metal contacts and mesa structures onto the substrate using photolithography has been discussed in detail. The mechanism of thin-film deposition of Au/Ge/Ni alloy for forming ohmic and Schottky contacts has been presented, followed by a brief discussion of wet etching for undercut profiles. E-beam lithography, which can overcome the resolution limitation of photolithography, has been introduced with a description of its basic elements, followed by a discussion of the proximity effect. 
Overall, this chapter provides the reader with the fundamental knowledge needed to understand the basic fabrication and characterisation processes, which serves as a tool for better understanding the fabrication of the planar nanodevices discussed in the next chapter (chapter 5). Introduction The evolution of the semiconductor industry has brought a revolutionary change in the way we live today. From the invention of the germanium transistor in 1947 to the latest sensation, the graphene transistor, the world has seen some of the most spectacular breakthroughs that humankind could have imagined only a few decades ago. In the last fifteen years, more than twelve Nobel prizes have been awarded for research in the field of nanotechnology. 1.1 Sensors and sensor science Life without sensors and sensing would be like an opera without a singer or a violin without strings; such a life does not exist. Sensors and sensing are basic properties of life, responsible for the closed-loop, real-time control of what is going on inside an organism and how it reacts to the outside situation. From bacteria to plants and from animals to human beings, all living organisms use their sensing organs for orientation and communication, for monitoring the environment and for their survival (F. G. Barth, J. A. C. Humphrey and T. W. Secomb, Sensors and Sensing in Biology and Engineering, Springer Wien New York, 2003, chapters 1 and 2, pp. 3-34). Digital systems, however complex and intelligent they are, must receive information from the outside world. Sensors act as an interface between various physical quantities and electronic circuits that 'understand' only a language of moving electrical charges; in other words, sensors are the eyes, ears and noses of silicon chips. Some sensors are relatively simple and some are complex, but all operate on fundamental physical principles. 
Understanding these devices generally requires an interdisciplinary background in fields such as physics, electronics, and chemistry. Sensor research has thus brought a unique team of chemists, biologists, physicists, and electronic engineers together on one platform, making it a truly interdisciplinary field.

1.1.1 The term 'Sensor'

In this ever-changing world, sensors are becoming ubiquitous in our daily lives and play an important role in this process. Since the early 1990s, the semiconductor industry has seen tremendous growth in the development of a variety of sensors. Technological trends in this field have made electronic products not only smaller and sleeker, but also more interactive and powerful. These sensors, with their improving performance-cost ratio, will be key components of future nanoelectronic devices (http://www.frost.com/prod/servlet/market-insighttop.pag?docid=140061375). The word sensor is derived from the Latin word sentire, which means "to perceive". A sensor i

Tuesday, August 20, 2019

Magnetic Resonance Imaging Essay -- Biology Essays Research Papers

Magnetic Resonance Imaging

MRI is a procedure, in wide use since the 1980s, for seeing the anatomy of the internal organs of the body. It is based on the phenomenon of nuclear magnetic resonance (NMR), first described in landmark papers over fifty years ago (Rabi et al. 1938; Rabi, Millman, and Kusch 1939; Purcell et al. 1945; Bloch, Hansen, and Packard 1946) (4). MRI is a valuable diagnostic and research tool, with practical applications in surgical planning and in conquering diseases. The imaging procedure is painless and non-invasive, although sometimes discomforting, as the patient lies down in a body tube that surrounds them. For many years, closed MRI units have been the standard in helping physicians make a diagnosis. These closed units featured a long tube that the patient was placed inside during the procedure. This was often uncomfortable due to the "closed in" feeling and was especially stressful for patients who suffer from claustrophobia. The newest generation of MRI units is open on all four sides, which completely alleviates the "closed in" feeling while still providing the physician with the most accurate information possible to aid in diagnosis (2). A patient does not see or feel anything; a faint knocking sound may be heard as the machine processes information. Patients may choose to listen to music, even having the option of bringing their own CDs. Most MRI procedures take less than an hour. MRI technology is based on three things: magnetism, radiofrequency, and computers. The magnetic resonance machine is a big, strong magnet. When the body is inside, every proton of the body is oriented in the same way (for instance, with the positive pole up). Water ... ...netic Resonance Imaging is one of the most accurate imaging modalities available today. It is an application of computer technology that has generated knowledge for the future and for practical application today.
The field of imaging continues to expand as new dimensions in the acquisition of physiological and biochemical information are avidly pursued.

WWW Sources

1) Principles of Functional Magnetic Resonance, http://www.mch.com/
2) Consultants in Radiology, http://www.cirpa.com/Pages/OpenMRI.html
3) MIT Encyclopedia of Cognitive Sciences, https://cognet.mit.edu/login/?return_url=%2Flibrary%2Ferefs%2Fmitecs%2Fugurbil.html
4) Tracking Neural Pathways with MRI, https://cognet.mit.edu/login/?return_url=%2Flibrary%2Ferefs%2Fmitecs%2Fugurbil.html
5) MRI of Hippocampus in Incipient Alzheimer's Disease, http://www.uku.fi/neuro/37the.htm