Wednesday, July 31, 2019

Business risk and risk assessment: Apple Essay

I. The Company’s Core Business Processes and Strategic Objectives

The Company’s products can be divided into two main categories: personal computers and related products, and portable digital music players and related products. Based on the annual report, the “Company designs, manufactures and markets” (Annual Report 2005 1) many variations of the products mentioned above. The more popular products of the Company include the “Macintosh line of desktop and notebook computers, the iPod digital music player, the Xserve G5 server and Xserve RAID storage products, a portfolio of consumer and professional software applications, the Mac OS X operating system, the iTunes Music Store, a portfolio of peripherals that support and enhance the Macintosh and iPod product lines, and a variety of other service and support offerings” (1). Design is mainly a concern of the Company’s research and development. Because the Company is in the technology industry, research and development is a crucial component of its operations. It is the means by which the Company keeps its competitive advantage. In its annual report, the Company admitted that “the Company’s ability to compete successfully is heavily dependent upon its ability to ensure a continuing and timely flow of competitive products and technology to the marketplace” (14). As a corollary to research and development, the creation, protection, and acquisition of intellectual property rights are also a major concern for the Company. The Company is in possession of several patents and copyrights. On one hand, the Company is concerned with the protection of its patents, copyrights, trademarks, and service marks worldwide. On the other, it must protect itself from infringing on others’ intellectual property rights. The Company does not rely only on its ability to create intellectual property; it also relies on intellectual property owned by third parties, acquired through licensing agreements.
Because the Company is engaged in producing technology year after year, the manufacture of the Company’s products may create complications. The Company manufactures personal computers and accessories, iPod digital music players and accessories, and a variety of consumer and business software applications. The raw materials for these products are sourced elsewhere. Certain key components are sourced from one or a limited number of outside sources (Annual Report 2005 14). In 2005 and 2004, the Company experienced delays in relation to one of its key components, the PowerPC G5 processor (14). This led to the unavailability of certain Apple products in the market (14). After this incident, the Company announced its intention to shift its Macintosh personal computers from PowerPC G5 and G4 processors to Intel microprocessors (Apple to use Intel para. 1). This transition is expected to be fully implemented in 2007. The Company’s development of new products requires custom-made raw materials that are initially single-sourced until the Company determines the need to develop new sources (Annual Report 2005 14). The manufacture of raw materials and the assembly of some of the Company’s products are done in several foreign countries by third-party vendors. The Company’s marketing is done through the Company’s website, company-owned retail stores, direct selling by the Company’s sales force, and third-party wholesalers, resellers, and value-added resellers. The Company’s main markets are the education, business, creative, and consumer markets (Annual Report 2005 12). In 2005, the US education market accounted for more than 12% of the Company’s net sales (12). The Company is not dependent on any single customer for its income. In fact, no single customer of the Company accounted for more than 10% of its sales for three successive fiscal years, 2003 to 2005 (12).
The Company is divided into four reportable operating segments: the Americas, Europe, Japan, and Retail. It also has an operating segment in Asia-Pacific. The three geographic segments mentioned above do not include retail. The Retail segment operates in the United States, Canada, the United Kingdom, and Japan (3). The Company intends to continue its substantial investment in research and development. The Company’s strategic plan includes the improvement of the Company’s existing products as well as the development of new ones (7). The Company also believes in capitalizing on the convergence of digital consumer products (7). This is in keeping with industry trends. For example, both the Company and Microsoft have patents that would improve or create Wi-Fi sharing ability (wireless connectivity) in the iPod, iPhone, and Zune (Cheng para. 1). Zune, Microsoft’s digital music player, already has a wireless sharing capability which the iPod hopes to emulate. The new patent of the Company may also make it possible for the consumer to purchase media directly from a server through the iPod or iPhone (para. 5 and 6). The Company also plans to continue to exploit the perceived advantages of the Company’s products. These advantages are “innovative industrial design, intuitive ease-of-use, and built-in networking, graphics and multimedia capabilities” (Annual Report 2005 2). Another shift in the Company’s product development is the shift to “a greener apple.” The Company announced its intention to continue to remove toxic substances from new products and aggressively recycle old products (Jobs). The Company claims that it is leading the industry’s efforts to create a more environmentally responsible company and products. The Company plans to create more energy-efficient products in the future (para. 29). The Company is not alone in this. Other companies have also exerted efforts to show social and environmental awareness.
Sometime in 2007, Google released a more energy-saving “black screen” after a study showed that a black screen uses less electricity than a white one. As far as its marketing is concerned, the Company plans to expand the distribution of its products. In the past year it has focused on adding to its direct selling capabilities and improving its sales staff. The Company will continue in this vein by building more Company-owned stores in high-traffic locations (Annual Report 2005 8). It also aims to widen its consumer base by targeting first-time computer owners and people who do not own a Macintosh computer (8). The Company also plans to continue building brand awareness by increasing investment in marketing and advertising, as shown by the increase in selling expenses over the years.

II. Business Risks

Research and development is a major component of the Company’s business risk. It involves a significant amount of the Company’s resources, with research and development expenditures amounting to $534 million, $489 million, and $471 million in 2005, 2004, and 2003, respectively (Annual Report 2005 13). The benefits are also contingent on several factors, including the ability of the Company to determine which products or innovations can be successfully developed, manufactured, and marketed. There is always the risk of choosing the wrong innovation on which to focus resources. The failure to produce marketable products regularly means loss of resources and market standing. Research and development also carries legal risks. The Company has admitted that because of the rapid change in technology and the pace at which new patents are being issued, “it is possible certain components of the Company’s products and business methods may unknowingly infringe existing patents of others” (15).
Aside from suits relating to infringement of intellectual property rights, the Company is also facing various suits in relation to its products, as well as a derivative suit filed by its shareholders involving unfair competition and false and misleading proxy statements. In 2006, the Company was placed under scrutiny due to stock option grants, some of which were issued to the Company’s CEO, Steve Jobs, in 1997 and 2001 (Iwata). There were allegations by stockholders that the grants were part of a “backdating scheme,” a scheme where it is made to appear that the options were transacted at an earlier date when the shares were valued lower (Apple comes under scrutiny). The investigation showed thousands of backdated grants, including two made to CEO Jobs, the second of which did not observe the requirements for validity (Iwata). CEO Jobs was not held accountable for the irregularity of the grant. However, because of the irregularity in the stock option grants issued, the Company restated prior years’ financial statements. Because of these events, the Company admitted in its annual report (2006) that there is further risk of “litigation, regulatory proceedings and government enforcement actions” (21). The manufacturing of the Company’s products raises some special concerns. As stated above, certain key components can only be obtained from a single or limited source (Annual Report 2005 13). Even key components that are not from a single or limited source are sometimes subject to “availability constraints and pricing pressures” (13). In fact, sometime in 2005 and 2004, the Company already experienced delays in acquiring key components, which led the Company to change one of the major components of one of its products. The Company admits that the loss of certain suppliers would have an adverse effect on the Company (14).
Because of this, there is a risk that the Company will not be able to meet demand for the Company’s products or that the Company will incur delays in the delivery of products ordered by customers. The Company also relies on third parties to supply digital content for its iTunes stores and to develop certain software applications. The failure of third parties to supply digital content affects not only the performance of the iTunes stores but also the dominant position of the Company’s digital music player. In the same manner, the failure of software developers to develop programs compatible with the Company’s computer platform, owing to the bigger market for Windows and Linux, will adversely affect the demand for the Company’s personal computers. The use of foreign third-party vendors in the final assembly of the Company’s portable products and as suppliers of raw materials increases the Company’s risk of being adversely affected by political and economic conditions in these foreign countries. Political upheaval and economic crises in foreign countries can affect suppliers’ ability to meet the Company’s demand. The Company faces cutthroat competition in many of its products. At the advent of personal computers, the Company owned a significant chunk of the market. Over the years, the Company’s market share grew smaller and smaller. In July 2006, the Company’s market share was around 2.2% (Apple market share myth), a significant drop from its original market share. However, percentage figures do not account for the growth of the PC market since its birth in the 1980s. The decline in the Company’s market share can also be attributed to the growth of numerous generic brands that are much cheaper than the Company’s Mac computers. The proliferation of “clones” led many companies to lower their prices and profit margins to gain a bigger market share. There is an ongoing price competition in the PC market, and the Company is striving to be competitive in this area.
However, the Company’s business strategy seems to focus less on making cheaper PCs and more on developing products that appeal to its niche markets, such as the creative market (Annual Report 2005 2). This strategy is a business risk because the limited market base makes the Company more vulnerable to economic factors. A decline in the spending ability of one of its niche markets can have a greater impact on the Company than if it had a diverse market. On the other hand, it removes the Company from competition in market segments that are already saturated with other players. Some analysts believe that part of the upside of the Company’s strategy is that it has refused to compete in a market over which Microsoft already has a monopoly (Apple market share myth). Microsoft acquired a monopoly in the industry by selling cheap PCs with expensive software, a system called “exclusive software bundling.” This makes it difficult for other companies to develop operating systems that are competitive with Microsoft’s. The Company’s strategy of focusing on improving what consumers perceive as the functional and design advantages of the Macintosh platform opens the Company to the risk mentioned above, but it also removes it from competing in saturated markets. The digital music player market is expected to grow to 286 million units by 2010 (Guza para. 1). The Company’s own product, the iPod, continues to dominate the market; however, many competitors are cropping up, challenging the Company’s dominant position. Analysts believe that the Company should not be complacent regarding its dominant position in the business, since the digital music player market is young and has only penetrated a small portion of the market in the United States (Siklos). Although many competitors have tried to challenge the Company and failed, the competition is not giving up.
Competitor Microsoft came up with Zune, its own brand of digital music player, which is compatible with Microsoft’s own online music store. Samsung, SanDisk, and Creative have come out with products of their own. Software, hardware, and online companies are working together to address technical difficulties in the initial launch of their own digital music players and to improve their services (Wingfield para. 4). There is a risk that the Company’s music-related products may follow the road of its personal computers.

III. Three Most Significant Financial Statement Accounts

The three most significant financial statement accounts for the Company are research and development, inventory, and common stock. Research and development is significant because the Company is engaged in the production and marketing of technology. Not only is research and development expense significantly higher compared to other industries, it is also the cost which enables the Company to continue its existence. In the industry where the Company belongs, obsolescence happens very fast. If the Company fails to innovate, there will come a time when the Company itself will be obsolete, since consumers will have switched to more recently developed products. Many of the Company’s strategic plans are tied up with research and development, such as the plans to improve existing products and the move towards convergence of digital products. The Company’s plans to improve and add innovations to existing products will involve a significant amount of the Company’s resources. The amounts the Company spends on research and development are expensed outright, except for costs which are incurred after the innovation has been determined to be technologically feasible (Annual Report 2005 68).
The failure of the Company to produce technologically feasible products may increase research and development expense, in the same manner that success in developing technologically feasible products does not necessarily decrease research and development expense. If all of the development costs for a product were incurred before it was determined to be technologically feasible, all of those costs are expensed outright regardless of eventual feasibility. Based on the Company’s financial statements, capitalization of research and development expense is minimal (77). Inventory is significant for the Company since its operations involve both manufacturing and retail. The Company’s inventory is subject to several business risks already discussed above. In relation to the supply issue, the Company entered into long-term supply agreements with several companies, which bind the Company to these suppliers until 2010. As part of the agreement, the Company was required to make prepayments amounting to $1.25 billion in the second quarter of 2006 (Annual Report 2005 91). Part of the Company’s objectives is to ensure a continuing and timely flow of competitive products and technology to the marketplace. The achievement of this objective means that the Company’s inventory levels are always sufficient to meet demand for the Company’s products. This would also mean that the Company has successfully managed its inventory during the year. Proper management of inventory would result in a year-end inventory level that is neither too high nor too low. The Company’s common stock is significant for the year 2006 because of the discovered irregularities in the issuance of stock option grants issued in 1997 and 2001. These resulted in allegations of fraud and falsification of documents (Wearden para. 4). The Company has already investigated the matter, and the result of the investigation exonerated CEO Steve Jobs of any misconduct.
However, restatements of prior years’ financial statements were made, including the common stock and other related accounts (para. 3). This account is not necessarily affected by the Company’s strategic objectives. The stock option grant issue itself affected the performance of the Company’s stock in the market and even raised the possibility of delisting from NASDAQ, which turned out to be without basis.

IV. Management Assertions

The management assertions relevant to research and development expense are completeness, accuracy, cut-off, and classification. Completeness is a relevant management assertion because research and development is an expense account, and so there is a risk that the Company will not include all research and development costs incurred in order to increase the net income for the year. Accuracy is relevant because there is a risk that transactions relating to this account are not recorded properly, resulting in under- or overstatement of the expense account and, in effect, of net income for the fiscal year. Cut-off is relevant for research and development so that there is proper matching of the expense with the revenue earned during the fiscal year. Failure to record the expense in the correct accounting period can also result in over- or understatement of the net income for the year. Classification is also relevant for research and development because there is a risk that the Company will capitalize research and development improperly, resulting in the overstatement of net income for the year and inflating the Company’s assets even if there are no expected future benefits. Failure to record the amount in the proper account can also mean that there is no matching of income and expense. The management assertions relevant to inventory are existence, valuation, and rights.
Existence is a relevant management assertion because there is a risk that the Company will record assets that are not there in order to make the financial condition of the Company look better to investors. The recording of assets that do not exist can also mean failure to record expenses, which, in effect, results in overstatement of net income. Valuation is also relevant because there is a risk that the Company may overstate the value of the assets to improve the financial statements of the Company. In either management assertion, there is a risk of management inflating the assets of the Company, usually to improve the stockholders’ equity of the Company. The management assertion as to rights over inventory is also relevant because there is a risk that the Company included in its assets inventories whose ownership has already passed to another, to improve the financial statements of the Company. The management assertions relevant to common stock are existence and valuation. Existence is a relevant management assertion because there is a risk that the Company records stocks which are not actually subscribed and issued, or issues stock for which no consideration was actually received by the Company (also called watered stocks). Valuation is also relevant because there is a risk that the Company will overvalue the property received in consideration for the stocks issued, particularly if the stock is issued for consideration other than cash, making it appear that the Company is better off than it actually is. Both management assertions can be used by the Company to lure investors to invest in the Company under false pretenses. Although incorrect management assertions can result from causes that are not deliberate on the part of management, such as mistakes, the assertions mentioned above are relevant to those accounts because there is the additional risk of deliberate misstatement on the part of management.

V. Environmental Risks

There is low inherent, control, and detection risk in the management assertions of completeness and accuracy of the research and development expense, based on the Company’s conservative approach to recording research and development, as well as the relative simplicity of identifying and recording research and development expense. On the other hand, the management assertion relating to the cut-off of research and development expense is assessed as having high inherent, control, and detection risk because of the lack of sufficient data regarding the Company’s processes and controls relating to this account. Because the risks mentioned above are assessed at maximum, more substantive tests shall be performed to decrease audit risk. There is high inherent risk in the classification of research and development expense because of the difficulty of determining technological feasibility. The determination of technological feasibility can be extremely subjective. On the other hand, there is low control and detection risk in the classification of research and development expense because, based on the Company’s past practices, the Company is very conservative in capitalizing research and development expenses. The percentage of research and development expense capitalized by the Company is very small compared to the research and development expense incurred every year. It is the Company’s policy to record all development costs incurred before the determination of technological feasibility as expense, and the determination of technological feasibility is usually done after a large portion of the cost of development has been incurred, so that only a small portion of the cost is actually capitalized and amortized. The inherent, control, and detection risks are high for all assertions related to inventory because the operations of the Company are complex and international.
The final assembly of some of the Company’s products performed by the Company itself is done in different locations outside the United States. There are also final assemblies of the Company’s products that are performed by third parties in different countries in Asia. The Company also takes advantage of several ways of marketing its products. It uses company-owned stores, direct selling, third-party sellers, and online selling. These make it extremely difficult to keep track of the movement of the inventory and to determine when ownership over the inventory changes hands. The inherent risk is assessed as high for the management assertions of existence and valuation of common stock. This is because of the investigation which the Company itself initiated in relation to its stock option grants. The investigation caused the Company to adjust its income from prior years by $84 million. The Company also has stock-based compensation plans consisting of stock option grants and stock purchase plans (Annual Report 2005 88), which call for complicated computations. The control and detection risks are assessed as low for the management assertions of existence and valuation of common stock because of the Company’s efforts to investigate the matter as soon as the problem arose. It was the Company itself that announced the existence of irregularities in the issuance of its stock option grants. The Company has put in place control mechanisms to address the matter. Moreover, records of the investigation conducted can help the auditor minimize detection risk.

Tuesday, July 30, 2019

Conceptualizing a Business Essay

Strategic planning for the purpose of developing a business is vital. In my opinion, a strong vision, mission, and values make up the foundation that is required to build a successful business. This paper will introduce the business selected in week two and will explain the importance of the selected business’s vision, mission, and values in determining a strategic direction. The created vision for this organization will clearly demonstrate the future plans for the business. The guiding principles or values for the selected business, considering the topics of culture, social responsibility, and ethics, will be defined. Next, an analytical overview of how the vision, mission, and values guide the organization’s strategic direction will be discussed. Finally, an evaluation of how the organization addresses customer needs and a critique of how the business achieves competitive advantage will be performed. When selecting a business, these planning processes are important and will help define what direction the business is going in for success. The first objective in strategically planning a business is to have a vision. As stated by BusinessDictionary.com, a vision statement is “an aspirational description of what an organization would like to achieve or accomplish in the mid-term or long-term future. It is intended to serve as a clear guide for choosing current and future courses of action.” The vision for the company is to be like Wal-Mart, a one-stop shop. I envision the hair salon becoming a unisex salon, spa, and barber shop. The vision is for a person to come in and get his or her hair, nails, and skin care done, while having the option to purchase professional hair and beauty products at a fair and reasonable price. The vision is to incorporate a boutique where people can not only get pampered but also buy a nice outfit to complete their look.
The motto is, “We keep you neat from your head to your feet,” and therefore incorporating a boutique will confirm the motto of the business. The vision is to incorporate services that will stand out only in the selected salon and to provide professional caregivers and products that make customers feel as if they are on top of the world, relaxing in a cloud of comfort. The business selected is a professional hair salon. When considering starting a business, one should ask oneself, “What is the mission for my business?” The mission of a company is the unique purpose that sets it apart from other companies of its type and identifies the scope of its operations (Pearce & Robinson, 2009). The mission for the selected hair salon is to supply products and services to customers with exceptional customer service, and to create a pleasurable environment with high-level professionals for desired hair and body care results. Our motto is, “We keep you neat from your head to your feet.” The chosen business strives to use high-quality products with passion and courtesy to all clients. The name for the hair salon will be “The Malveaux Hair Experience.” The Malveaux Hair Experience needs guiding principles or values considering the topics of culture, social responsibility, and ethics. It is the responsibility of the salon owner to ensure that all personnel are adequately trained, licensed, and understand each service offered (Fulbright, 2004). It is the salon owner’s responsibility to be aware of the liabilities of the salon, licensed personnel, a clean environment, and clean equipment. The social responsibility of a salon involves considering hair trends, marketing, and clients. The salon should be run with individual morals and values as well as the values of the business. In a salon setting, the professionals must have respect for one another. There has to be a strong trust factor regarding one another’s personal items and salon products.
The salon’s personnel has to consist of a group of team players. Although stylists have their own styles of artistry, they must all be on the same professional level. Each employee of the salon must portray positive attitudes and leadership skills. Customers will be greeted with a friendly smile and treated fairly and with respect. It is the responsibility of the salon’s team members to create an environment that treats people the way they expect to be treated and not subject the business to anything short of this type of behavior. The vision, mission, and values guide the organization’s strategic direction. The vision, mission, and values of the company help to forecast the business’s success. As long as the business is following the strategic plan on which the foundation and integrity of the company are built, customers will continue to come and receive services. People will spread the word of the good service they received while visiting The Malveaux Hair Experience. Good values will help the business grow in areas the business could not imagine. If the employees and manager of the business follow the strategic plan of the company, the company will be successful and profitable. The vision, mission, and values will help all employees involved and will allow the team to be of one accord. This is a perfect example of how to keep team communication consistent and give excellent customer service to clients. When employees work toward one common goal, the organization is channeled in the right direction. The mission helps to generate possible and desired opportunities. The organization needs to evaluate how it addresses customer needs and critique how it achieves competitive advantage. The salon has to evaluate the services and products provided to the customers and evaluate whether or not those needs are met in accordance with the ethics of the business.
The business has to evaluate whether or not it is providing services that other salons are not offering. The business has to evaluate its competition and make sure it is providing the best customer service. The business can evaluate repeat customers and whether their needs are being met. If a customer tells his or her family and friends about the services The Malveaux Hair Experience is providing, the word-of-mouth referrals will be a good evaluation of how the business is doing. This will also provide a way to analyze whether or not the company is meeting the needs of the customers and meeting or exceeding the competitive advantage. This paper explained the importance of a business’s vision, mission, and values, and determined the strategic direction. When a strategic plan is in place, it helps the business determine what needs to be the main focus. Planning helps the business show leadership and direction. The business has to have some direction to achieve the goals set for the business. Working with a team of people who focus on the same goal will allow much success for businesses. Strategic management is the set of decisions and actions that result in the formulation and implementation of plans designed to achieve a company’s objectives (Pearce & Robinson, 2009).

Psychology and Association Test Essay

Experimental psychology is an area of psychology that utilizes scientific methods to research the mind and behavior. While students are often required to take experimental psychology courses during undergraduate and graduate school, you should really think of this subject as a methodology rather than a singular area within psychology. Many of these techniques are also used by other subfields of psychology to conduct research on everything from childhood development to social issues. Experimental psychologists work in a wide variety of settings, including colleges, universities, research centers, government, and private businesses. Some of these professionals may focus on teaching experimental methods to students, while others conduct research on cognitive processes, animal behavior, neuroscience, personality, and many other subject areas. Those who work in academic settings often teach psychology courses in addition to performing research and publishing their findings in professional journals. Other experimental psychologists work with businesses to discover ways to make employees more productive or to create a safer workplace, a specialty area known as human factors psychology. Do you enjoy researching human behavior? If you have a passion for solving problems or exploring theoretical questions, you might also have a strong interest in a career as an experimental psychologist. Experimental psychologists study a huge range of topics within psychology, including both human and animal behavior. If you’ve ever wanted to learn more about what experimental psychologists do, this career profile can answer some of your basic questions and help you decide if you want to explore this specialty area in greater depth. An experimental psychologist is a type of psychologist who uses scientific methods to collect data and perform research. Experimental psychologists explore an immense range of psychological phenomena, ranging from learning to personality to cognitive processes.
The exact type of research an experimental psychologist performs may depend on a number of factors, including his or her educational background, interests and area of employment. According to the Bureau of Labor Statistics: “Experimental or research psychologists work in university and private research centers and in business, nonprofit, and governmental organizations. They study the behavior of both human beings and animals, such as rats, monkeys, and pigeons. Prominent areas of study in experimental research include motivation, thought, attention, learning and memory, sensory and perceptual processes, effects of substance abuse, and genetic and neurological factors affecting behavior.” Experimental psychology is an approach to psychology that treats it as one of the natural sciences, and therefore assumes that it is susceptible to the experimental method. Many experimental psychologists have gone further, and have assumed that all methods of investigation other than experimentation are suspect. In particular, experimental psychologists have been inclined to discount the case study and interview methods as they have been used in clinical and developmental psychology. 
Since it is a methodological rather than a substantive category, experimental psychology embraces a disparate collection of areas of study. It is usually taken to include the study of perception, cognitive psychology, comparative psychology, the experimental analysis of behavior, and some aspects of physiological psychology. Some Famous Experimental Psychologists: Wilhelm Wundt (1832-1920) was a German physician, psychologist, physiologist and philosopher, known today as the “Father of Experimental Psychology.” Wundt later wrote the Principles of Physiological Psychology (1874), which helped establish experimental procedures in psychological research. After taking a position at the University of Leipzig, Wundt founded the first of only two experimental psychology labs in existence at that time. (A third lab already existed: William James had established a lab at Harvard, which was focused on offering teaching demonstrations rather than experimentation, and G. Stanley Hall founded the first American experimental psychology lab at Johns Hopkins University.) Wundt was associated with the theoretical perspective known as structuralism, which involves describing the structures that compose the mind. He believed that psychology was the science of conscious experience and that trained observers could accurately describe thoughts, feelings, and emotions through a process known as introspection. Psychologist Hermann Ebbinghaus was one of the first to scientifically study forgetting. In experiments in which he used himself as the subject, Ebbinghaus tested his memory using three-letter nonsense syllables. He relied on such nonsense words because relying on previously known words would have made use of his existing knowledge and associations in his memory. In order to test for new information, Ebbinghaus tested his memory over periods of time ranging from 20 minutes to 31 days. 
He then published his findings in 1885 in Memory: A Contribution to Experimental Psychology. His results, plotted in what is known as the Ebbinghaus forgetting curve, revealed a relationship between forgetting and time. Initially, information is often lost very quickly after it is learned. Factors such as how the information was learned and how frequently it was rehearsed play a role in how quickly these memories are lost. The forgetting curve also showed that memory does not continue to decline until all of the information is lost. At a certain point, the amount of forgetting levels off. What exactly does this mean? It indicates that information stored in long-term memory is surprisingly stable. In the realm of mental phenomena, experiment and measurement have hitherto been chiefly limited in application to sense perception and to the time relations of mental processes. By means of the following investigations we have tried to go a step farther into the workings of the mind and to submit to an experimental and quantitative treatment the manifestations of memory. The term, memory, is to be taken here in its broadest sense, including Learning, Retention, Association and Reproduction. The principal objections which, as a matter of course, rise against the possibility of such a treatment are discussed in detail in the text and in part have been made objects of investigations. I may therefore ask those who are not already convinced a priori of the impossibility of such an attempt to postpone their decision about its practicability. Although Gustav Fechner did not call himself a psychologist, some important historians of psychology, like Edwin G. Boring, locate the experimental rise of this science in Fechner’s work (1979, p. 297). 
More specifically, it was Fechner’s famous intuition of October 22, 1850 that, according to Boring (quoted by Saul Rosenzweig, 1987), gave rise to his work as a psychophysicist (Rosenzweig also notes that this date, which serves as the reference for the event, is curiously close to Boring’s birthday, October 23rd). More concisely, Fechner’s psychophysics can be thought of as the junction of a philosophical doctrine (which correlates spirit and matter as aspects of the same being), an experimental methodology (correlating the variations of stimuli and the sensations perceived) and an assemblage of mathematical laws (the famous Weber-Fechner law); the last two aspects are considered especially relevant to the rise of psychology. Nevertheless, to think that the rise of a science is restricted to the establishment of experimental procedure and to a mathematical formalization is to forget a whole field of questioning in which the instruments created by Fechner could, in the middle of the 19th century, overcome some obstacles and answer some questions, notably the ones posed by the critical philosophy of Immanuel Kant. Ernst Weber was a German physiologist and psychologist. He is regarded as a predecessor of experimental psychology and one of the founders of psychophysics, the branch of psychology that studies the relations between physical stimuli and mental states. He is known chiefly for his investigation of subjective sensory responses (sensations) to external physical stimuli: weight, temperature, and pressure. Weber experimentally determined the accuracy of tactile sensations, namely, the distance between two points on the skin at which a person can perceive two separate touches. He discovered the two-point threshold: the distance on the skin separating two pointed stimulators that is required to experience two rather than one point of stimulation.
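The shape of the forgetting curve described above, steep early loss that then levels off, can be sketched numerically. This is a hedged illustration only: the exponential-decay-toward-a-floor form and the constants below are illustrative choices, not Ebbinghaus’s actual 1885 measurements.

```python
import math

def retention(hours, floor=0.2, stability=9.0):
    """Approximate fraction of material retained after `hours`.

    Exponential decay toward a nonzero floor, mirroring the two
    features of the forgetting curve: rapid initial loss, then a
    level-off. `floor` and `stability` are made-up illustrative
    values, not fitted to Ebbinghaus's data.
    """
    return floor + (1 - floor) * math.exp(-hours / stability)

# Loss is steep at first, then flattens near the floor,
# across the same 20-minute-to-31-day range Ebbinghaus tested:
for label, hours in [("20 min", 1 / 3), ("1 day", 24),
                     ("1 week", 24 * 7), ("31 days", 24 * 31)]:
    print(f"{label:>7}: {retention(hours):.2f}")
```

The nonzero floor captures the observation that the curve flattens rather than dropping to zero, which is what the essay means by long-term memory being surprisingly stable.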

Monday, July 29, 2019

Media Sensationalism Research Paper Example | Topics and Well Written Essays - 2500 words

Media Sensationalism - Research Paper Example It is a hallmark of playing with public emotions in order to create a picture intended by government agents or terror groups. Some of the tactics normally employed include editorial bias, exaggeration, and deliberately obtuse information intended to play with public emotion. Besides, it encompasses magnifying trivial information in order to appear loud and sensible2. Media sensationalism appears to have thrived best during the American September 11, 2001 terror attack and subsequent events. Notably, as the Bush administration strived to keep a bold face after the terror act, the reality of the event eventually created public discontent about the role of security agents and the government in safeguarding the people. It was a devastating event that left 3031 people dead and many maimed. The paper explores how media sensationalism has since evolved as a terrorist tool and as a counter-terrorism weapon. Without communication, there would be no terrorism. Though terrorism existed prior to mass media, terror groups have begun using mass media as a tool to discredit perceived success in the war on terror. Most notably, Islamist extremist groups such as ISIS have used mass media as a war tool. The 1997 video clip Dial H-I-S-T-O-R-Y by Johan Grimonprez reflects on the medium and the terrorist conflict well before the September 11 U.S. terror attack. The video offers a historical chart of airplane hijacking and how progressive television coverage became more and more deadly. It centers on an imagined conversation between a terrorist and a novelist. As the video progresses, media coverage increases, leading to society’s inward shock at the reality of plane hijacking by terrorists. The media coverage of such incidents serves to create societal tension. Mouna Abdel-Majid, a member of the PLO, tells the reporter that westerners have fought beyond their territory, and they were now avenging3. Notably, they engage in exchange

Sunday, July 28, 2019

Brand Management Strategy Essay Example | Topics and Well Written Essays - 2750 words

Brand Management Strategy - Essay Example Thus, a product may be a physical good (e.g., a cereal, tennis racquet, or automobile), service (e.g., an airline, bank, or insurance company), retail store (e.g., a department store, specialty store, or supermarket), person (e.g., a political figure, entertainer, or professional athlete), organization (e.g., a nonprofit organization, trade organization, or arts group), place (e.g., a city, state, or country), or idea (e.g., a political or social cause). Brands play a critical role in a firm's international marketing strategy. Coherent international brand architecture is a key component of the firm's overall international marketing strategy, as it provides a framework to leverage strong brands into other markets, assimilate acquired brands, and rationalize the firm's international branding strategy. This paper aims at making a detailed analysis of the product portfolio of Coca-Cola and determines the effectiveness of its brand strategies. Most discussion and research on branding, in both domestic and international markets, focuses on the equity or value associated with a brand name and the factors that create or are the underlying source of that value (Aaker, 1996; Kapferer, 1997; Keller, 1998). Considerable attention has, for example, been devoted to examining how to extend the value embodied in a brand and its equity to other products without diluting that value (Aaker and Keller, 1990). This interest has been stimulated in part by the increasing market power and value associated with a strong brand and in part by the prohibitive costs of launching a successful new brand. In international markets, interest has centered on global branding - defining the meaning of a global brand, discussing the advantages and pitfalls, and the conditions under which building a global brand is most likely to be successful (Roth, 1995a, b; Quelch, 1999). 
While this focus is appropriate for a relatively few high profile brands such as Coca-Cola, it ignores the complexity of the issues faced by the vast majority of multinational firms who own a variety of national, regional and international brands, at different levels in the organization, spanning a broad range of diverse country markets. Typically, these brands differ in their strength, associations, target market and the range of products covered, both within and across markets. Equally the use of brands at different organizational levels may vary from company to company. Research of Brand Portfolio Coca-Cola is the manufacturer, distributor and marketer of nonalcoholic beverage concentrates and syrups across the globe. They also manufacture, distribute and market some finished beverages. Along with Coca-Cola, which is recognized as the world's most valuable brand, they market four of the world's top five soft drink brands, including Diet Coke, Fanta and Sprite. The Company owns or licenses more than 400 brands, including carbonated soft drinks, juice and juice drinks, sports drinks, water products, teas, coffees and other beverages to meet consumers' desires, needs and lifestyle choices. More than 1.3 billion servings

Saturday, July 27, 2019

Graph theory Research Paper Example | Topics and Well Written Essays - 500 words

Graph theory - Research Paper Example The Swiss mathematician Leonhard Euler developed a solution to an old puzzle related to the possibility of establishing a path across every one of the seven bridges that span a forked river flowing past Königsberg (Biggs 140). From a conceptual perspective, a graph is formed by vertices and edges linking the vertices. From a formal perspective, however, a graph refers to a pair of sets (V, E), where V is a set of vertices and E is a set of edges. Based on these fundamental concepts underpinning graph theory, this paper seeks to explain the importance and application of the theory’s theoretical concepts in various fields (Biggs 124). The concept of graph theory is important because graphs allow for a simplification of complex concepts, eliminating irrelevant details without forfeiting much of the information necessary for the task. As asserted by Biggs (148), the assumptions made by graph theory match real-world conditions and are therefore not comparable to any other model. Among the fundamental uses of graph theory is providing a unified formalism for diverse-looking real-life problems; this has been a sufficient basis upon which algorithms have been presented in this common formalism. The theoretical concepts underpinning graph theory are widely used in studying and modeling various applications in diverse fields. These include the construction of bonds in chemistry, the study of molecules, and the study of atoms. Graph theory is widely used in sociology, for instance to measure the prestige of actors or to explore the mechanisms of diffusion (Biggs 150). Besides, conservation efforts in the biological sciences apply the concepts of graph theory, where vertices represent regions where certain species exist and edges represent paths of migration or movement from one region to another (Biggs 152). 
Such information is all the more important when examining breeding patterns or tracking the spread of parasites and diseases, and in the study of the
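The formal definition quoted in this excerpt, a graph as a pair of sets (V, E), can be made concrete with a short Python sketch. This is an illustrative addition: the vertex names and edges below are made up, not taken from the paper.

```python
# A graph as a pair of sets (V, E): V holds vertices, E holds
# edges, each edge a 2-element frozenset of vertices (undirected).
V = {"A", "B", "C", "D"}
E = {frozenset(p) for p in [("A", "B"), ("B", "C"), ("C", "D"), ("D", "A")]}

def neighbors(v):
    """Return the set of vertices joined to v by some edge in E."""
    return {u for e in E if v in e for u in e if u != v}

print(sorted(neighbors("A")))  # ['B', 'D']
```

In the conservation example the paper cites from Biggs, each vertex would be a region where a species exists and neighbors(region) would list the regions reachable by one path of migration.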

Friday, July 26, 2019

Managing stress among employees in an outpatient setting Research Paper

Managing stress among employees in an outpatient setting - Research Paper Example At the same time, the effectiveness of the changes being implemented needs to be determined at regular intervals. It is recognized that neglecting the evaluation phases causes the breakdown of implemented changes. This paper will discuss various strategies and techniques that can be effectively employed in evaluating the impacts of the introduced changes among employees in outpatient settings. There are three evaluation phases that are scientifically designed for program evaluation: formative evaluation, summative evaluation, and impact evaluation. The formative evaluation phase continuously acquires information regarding the introduced program in order to improve its performance. According to Lytras, Carroll, Damiani, Tennyson, Avison, Vossen, and Pablos (2008), in the summative evaluation phase the outcomes of the project are assessed, and from those results the project managers analyze the impact of the outcome on its actual beneficiaries, the shareholders (p.672). On the other hand, the impact evaluation phase focuses on the larger group of beneficiaries over a long period of time. Here we can use formative and summative evaluation techniques to determine the effectiveness of the introduced organizational change in an outpatient setting. ... Similarly, it is advisable for the project management team to assess the effectiveness of the change by considering its impacts on those employees who were affected by stress. This can be achieved by comparing individuals’ levels of performance prior to and after the implementation of the program. Their new ways of working, both as individuals and as a team, need to be evaluated to know whether the change has a positive effect on them or not. 
This process of change management is termed the change curve, which can be used for assessing the impacts of the implemented measures (Change management: Making organization change happen effectively, n.d.). It is important to evaluate the extent to which the measures being implemented are accepted by the workers and whether the measures are effective in helping employees bring out their full potential in the work undertaken. It would be better to assess the rate of absenteeism before and after the execution of the strategies. The record of employees’ medical leave would reflect the effectiveness of stress management measures, because a stress-free life offers physical as well as mental stability. At the same time, the finest way is to judge how effectively employees involve themselves in work and how well this is reflected in their performance. It is better to evaluate the workers’ contribution towards the development of the organization as a whole. An efficient supervisor would assist the management in evaluating individual performance frequently. The status of the organization also reflects the impacts of organizational change. To illustrate, it is essential to analyze the impact of the program on the economic interests of the shareholders. In addition, the progress of the organization and its stature

Thursday, July 25, 2019

Eugene Smith Essay Example | Topics and Well Written Essays - 1000 words

Eugene Smith - Essay Example He began taking photographs in 1932, and early subjects included sports, aviation and the Dust Bowl. After studying at Notre Dame University for a year he joined the staff of Newsweek. In 1938 Smith became a freelance photographer working for Life Magazine, Collier's Weekly and the New York Times. In 1942 Smith became a war correspondent and spent most of the next three years covering the Pacific War. His most dramatic photographs were taken during the invasion of Okinawa in April 1945. On 23rd May Smith was seriously wounded by a Japanese shell fragment. He was taking a photograph at the time, and the metal passed through his left hand before hitting his face. Smith was forced to return to the United States, where he had to endure two years of hospitalization and plastic surgery. In 1947 Smith joined Life Magazine and over the next seven years produced a series of photo-essays that established him as the world's most important photojournalist. These included essays entitled Country Doctor, Hard Times on Broadway, Spanish Village, Southern Midwife and Man of Mercy. Granted a Guggenheim Fellowship (1956-57), Smith began a massive picture essay of Pittsburgh. Smith's last great photo-essay, Minamata (1975), deals with the residents of a Japanese fishing village who suffered poisoning and gross disfigurement from the mercury wastes of a nearby chemical company. While photographing this project he was severely beaten by several local factory workers who were opposed to the revelations that his camera exposed. An extensive collection of his work was acquired by the Center for Creative Photography at the University of Arizona in 1976. Smith severed his ties with Life again over the way in which the magazine used his photos of Albert Schweitzer. Starting from his project to document Pittsburgh, he began a series of book-length photo essays in which he strove for complete control of his subject matter. 
This was followed by another large project on New York (1958-59). Smith also taught photojournalism at New York's New School for Social Research and was president of the American Society of Magazine Photographers. Complications from his consumption of drugs and alcohol led to a massive stroke, from which Smith died in 1978. Today, Smith's legacy lives on through the W. Eugene Smith Fund to promote "humanistic photography," which has since 1980 awarded photographers for exceptional accomplishments in the field. Of himself, he said: "I am an idealist. I often feel I would like to be an artist in an ivory tower. Yet it is imperative that I speak to people, so I must desert that ivory tower. To do this, I am a journalist, a photojournalist. But I am always torn between the attitude of the journalist, who is a recorder of facts, and the artist, who is often necessarily at odds with the facts. My principal concern is for honesty, above all honesty with myself..." His Works and Analysis: "A Walk to Paradise Garden", 1946 Smith's war wounds cost him two painful years of hospitalization and plastic surgery. During these years he took no pictures, and whether he would ever be able to return to photography was doubtful. Then one day, during his period of convalescence, Smith took a walk with his two children and, even though it was still intensely painful for him to operate a camera, came back with one of the

Wednesday, July 24, 2019

Do Structure Matter Research Paper Example | Topics and Well Written Essays - 750 words

Do Structure Matter - Research Paper Example In the New Yorker magazine, the designer shares the readers’ position in order to understand the topical matters they most need. Additionally, the New Yorker’s designers attain customers’ attention by using letters and words that enhance connection, employing challenging language and unbelievable revelations and statements that attract readers’ attention. The designer here therefore steps into the shoes of New Yorker readers to bring out the specific needs that would attract and retain their attention to the content of the magazine. The large bold typeface used helps communicate the significance of a heading, thereby improving readers’ attention. The different segments running along the top communicate the relevance of each topic covered, from books and fiction to daily comments, among other headings in bold typeface. The magazine avoids excessive use of decorative typefaces. However much these have the potential to attract customers, they may make the magazine hard to read through, which would push readers toward easier words and letterforms elsewhere in the magazine. This restrained use of decoration improves readers’ attention and retains their concentration on the content they read (Jessica & Carolyn, 2007). The New Yorker achieves tone and texture in its design by integrating lines of type, words and letterforms. Additionally, it makes good use of weight, line spacing, letter spacing and typeface to attract readers’ attention to the content of the magazine. These design qualities enhance the brightness and density of type, which moderates tone and texture in the magazine. Tone and texture determine the order in which readers go through a text. Putting the main topic in bold and coloring parts of the text makes them more attractive and appealing to readers irrespective of their relevance (Knight & Glaser, 2013). The magazine uses multiple fonts that prove

U.S. Airline Industry Regulation Essay Example | Topics and Well Written Essays - 500 words

U.S. Airline Industry Regulation - Essay Example Since deregulation, the air transport carriers have decreased in number, and a number of problems have plagued the industry and its workers, even though many favorable conditions have also prevailed in the airline industry. The term airline deregulation has been in the news for decades. Airline deregulation was signed into law by President Jimmy Carter, who removed the power of the Civil Aeronautics Board to allocate routes and set fares. Before this law the fares were the same regardless of the airline flown. Due to deregulation, airline fares have varied. From a consumer’s point of view, deregulation proved to be consumer friendly, as fares are much less expensive than they were before. (Buckfelder) Regulation, for its part, played a pervasive role. It subsidized increasing returns and had economy-wide positive spillovers. Regulation provided an institutional structure that encouraged investment in uncertain technologies, which would otherwise have been exposed to inconsistencies and market breakdown. Regulation created a highly concentrated industry: major trunk airlines with high market shares justifying high fixed costs and new aircraft technologies. (Yosef, 2005, p. 133) Air travel has increased drastically, and due to deregulation the airlines have improved their services and equipment and made air travel accessible to the general public. The rigid fares of the regulatory era have given way to today’s competitive price market. Deregulation introduced competition in the industry and in airline fares. It proved a successful step for commercial airlines: the airlines could now set their own policies, determine fares without any government intervention and expect high levels of profit. Although the industry gained a lot from deregulation, a few drawbacks also emerged. The industry had a unionized workforce that flourished on inefficiency and generous salaries, which proved to be a problem for a competitive

Tuesday, July 23, 2019

Major Events during World War II Essay Example | Topics and Well Written Essays - 1250 words

Major Events during World War II - Essay Example Adolf Hitler’s rise to power in 1933 reestablished the German army and prepared it well to participate in a war of invasion. Many events took place during World War II. Prior to the real war, tension grew for about three years, including the union of Germany and Austria, the incursion into Czechoslovakia and the Spanish Civil War. The result was the invasion of Poland by the German army; war on Germany was declared by Britain and France just two days after the German attack on Poland. The United States played a role in supplying Britain with war weapons but did not take part in the fighting (Alleman). Tripartite Pact The Tripartite Pact was created and signed between Germany, Italy and Japan in 1940 with the aim of fostering peace among themselves and the entire world. The three governments believed that in fostering peace, they would maintain a systematic way of doing things and promote partnership and prosperity among their people. In addition, it was the mandate of the three countries to collaborate with other countries in the globe. These countries, in agreement, both recognized and respected the leadership of each country so that they could create a new order of things in Europe and Greater East Asia (Robinson). They also confirmed that their agreement had no impact on the political status that existed then between the contracting authorities and Soviet Russia. The three countries agreed that the validity of the pact was to take effect immediately upon signing and was to last for ten years from the day of signing. Renewal of the pact would depend on negotiations between any of the countries and the high contracting authorities. The Lend-Lease Act The Lend-Lease Act was a United States federal government plan during World War II which was passed on March 11, 1941. President Roosevelt authorized the act. The United States provided this service with the aim of defending its country from external attacks as well as for economic benefits. 
This act provided the United States with the ability to supply war materials and other resources to allied nations while it remained officially neutral (Kellogg p.330). The act grew out of the earlier cash-and-carry arrangement, which England’s weakened economy could no longer sustain, as it was unable to purchase the materials and provide transport for them; the program came to an end in 1945. Attack on Pearl Harbor Japanese forces attacked Pearl Harbor on the 7th day of December 1941. Pearl Harbor was the base of about 50,000 American military personnel, the highest concentration of United States forces. The Japanese carriers and the ships that escorted them positioned themselves some miles away from the target area and launched the first group of fighters, bombers, and war planes. Their main aim was to destroy the U.S. fleet: the battleships, carriers and aircraft (Gropman p.11). Many U.S. soldiers were killed, and the Pacific fleet of the U.S. was damaged, although temporarily, which was a success for the Japanese. It was out of this that the United States of America declared war on Japan (Robinson). Battle of the Coral Sea In May 1942, the Battle of the Coral Sea took place. It was fought at sea and was the first of six Pacific battles fought between opposing aircraft carrier forces. The battle was a result of the Japanese

Monday, July 22, 2019

Text Messaging Essay Example for Free

Text Messaging Essay Texting has become an integral part of our lives; it has developed very rapidly throughout the world. The initial growth of text messaging started with customers in 1995 sending an average of 0.4 messages per GSM (Global System for Mobile communications) customer per month (Wikipedia, 2009). Today, text messaging is the most widely used mobile data service, with 35% of all mobile phone users worldwide, or 4.2 million to 7.3 million phone subscribers, being active users of SMS at the end of 2003. The largest average usage of the service by mobile phone subscribers is in the Philippines, with an average of 15 texts per day per subscriber (Wikipedia, 2009). Text messaging is most often used between private mobile users as a substitute for voice calls. Its popularity has grown to such an extent that the term texting has entered everyday language. It is a very powerful tool in the Philippines, where the average user sends 10-12 text messages a day. The Philippines sends on average 400 million text messages per day, or approximately 142 billion text messages a year. At the end of 2007, four of the top mobile service providers in the country stated that there were 42.78 million mobile subscribers in the Philippines; thus the Philippines has become the “texting capital of the world”. The expanding availability of text messaging has raised questions about the effect of texting on standard literacy. Many have reported unintentional intrusions of abbreviations used in texting, called “textisms”, in inappropriate contexts (Wood et al., 2009). This study aims to determine whether the texting habits of the first year high school students of Aldersgate Science High School should be a concern, as they may be significantly diminishing their spelling proficiency. Statement of the Problem This study aims to determine the correlation between the spelling proficiencies of texters and non-texters among selected students of Aldersgate College Science High School. 
It also aims to answer the following questions (the respondents of the study being the First Year High School students of AC Science High School):
1. What is the profile of the selected students of the AC SHS as to:
1.1 gender
1.2 age
1.3 score in the spelling proficiency test
1.4 monthly income of the family
1.5 text promo availed of
1.6 length of ownership of the cell phone
1.7 amount spent in texting
1.8 frequency of texting
1.9 type of text message sent
1.10 person sent text messages to
2. Is there a significant difference between the spelling proficiency of texters and non-texters?
3. Is there a significant difference in the spelling proficiency of texters when grouped according to the variables listed in 1.1 to 1.10 above?
Statement of Hypothesis
Null Hypothesis: There is no negative effect of texting on the spelling proficiency of the first year high school students of Aldersgate College.
Alternative Hypothesis: There is a negative effect of texting on the spelling proficiency of the first year high school students of Aldersgate College.
Scope and Delimitation of the Study
The study is confined to determining whether there is a negative effect of texting on the spelling proficiency of First Year High School students of Aldersgate College, through a series of surveys conducted during the first semester, school year 2009-2010.
Significance of the Study
To get a better idea of the effects of texting on teenagers and how much this technology was actually being used, a survey was conducted in Aldersgate College Science High School, Solano, Nueva Vizcaya. Seventy-two First Year High School students were asked questions about their usage of texting and instant messaging. 
To ensure the honesty of the answers, the surveys were anonymous and the students were told that their answers would not be used against them.

Summary

After distributing questionnaires regarding the effect of texting on the spelling proficiency of the first year students of Aldersgate College Science High School, the researchers gathered data indicating that texting habits had diminished the spelling proficiency of the students.

Conclusion

The researchers therefore conclude that texting habits contribute to the declining spelling proficiency of students: text language often confuses students about the correct spelling of words, leading to frequent misspellings.

Recommendation

The researchers recommend that a further study on the effect of texting on the spelling proficiency of students be conducted over a longer period of time, with a larger number of respondents observed for an adequate period.

Writing, a linguistically complex skill, draws heavily upon our cognitive abilities. Dr. Mel Levine confirms this in his book A Mind at a Time (2002) by stating that "Writing is one of the largest orchestras a kid's mind has to conduct." Does text messaging harm students' writing skills?

Yes. I believe students are carrying over the writing habits they pick up through text messaging into school assignments.
Maybe. Although text messaging may have some impact on how students write, I don't think it's a significant problem.
No. I believe students can write one way to their friends and another way in class. They can keep the two methods separate.
None of the above. (Comment below.)

Not only is texting used for person-to-person communication, but a number of groups have jumped onto the craze in recent years. Political campaigns, for example, have used it as a way to keep their supporters up to speed on events as they happen.
Protesters and organizers have used text messaging as a way to stay connected during actions, mobilizing large groups of people in real time. Various businesses allow users to sign up for updates via text, or to receive bills this way. It can be used to stay up to speed on stock prices, sports scores, and any number of other small bits of data that change rapidly.
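The texter versus non-texter comparison at the heart of the study above boils down to a two-sample significance test. As a rough sketch (the spelling scores below are invented for illustration, not data from the study), Welch's t statistic can be computed with the Python standard library alone:

```python
import math
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's two-sample t statistic and approximate degrees of freedom."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)  # sample variances
    se2 = va / na + vb / nb                          # squared standard error
    t = (mean(sample_a) - mean(sample_b)) / math.sqrt(se2)
    # Welch-Satterthwaite approximation for degrees of freedom
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical spelling-test scores out of 50 (not the study's data)
texters = [32, 28, 35, 30, 27, 33, 29, 31]
non_texters = [38, 40, 36, 39, 41, 37, 35, 42]
t, df = welch_t(texters, non_texters)
print(round(t, 2), round(df, 1))
```

A t value this far from zero, compared against a t table at the computed degrees of freedom, would lead to rejecting the null hypothesis of no difference between the groups.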

Sunday, July 21, 2019

Data flow diagram

A Data Flow Diagram (DFD) is a system modeling tool and one of the most popular and important representations in data flow modeling. A DFD allows us to picture a system as a network of functional processes, connected to one another by pipelines and holding tanks of data. It is a structured, diagrammatic technique representing external entities, logical storage, data sinks and data flows in the system. A DFD is also called a bubble chart, bubble diagram, process model, or work flow diagram.

Data Flow Diagram Types

Physical Data Flow Diagram: Physical data flow diagrams are implementation-dependent and show the actual devices, departments, people, etc. involved in the current system.

Logical or Conceptual Data Flow Diagram: A logical data flow diagram represents business functions or processes. It describes the system independently of how it is actually implemented, and focuses instead on how an activity is accomplished.

The components of the data flow diagram (DFD)

Processes: the basic processing items of a data flow diagram. They are used to transform incoming data flows into outgoing data flows. Processes that are not further decomposed have to be described by means of a textual specification. This text defines how the input data of the process are transformed into output data.

Terminators: data producers (data sources) or data consumers (data sinks) outside of the system.

Data flows: logical channels (pipelines) in which data are transported; they are represented by arrows connecting the processes.

Data store: storage space from which data can be read with a time delay after writing them, without a processing component.

Data flow diagrams are useful if you have lots of calculations to carry out, or if you are familiar with data flow techniques from a method you have used repeatedly before.
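As a toy illustration of the components just listed (all node and flow names here are hypothetical, not from the text), a DFD can be recorded as plain data: nodes tagged by kind, and flows as source-sink-data triples:

```python
# Nodes of a small, made-up DFD, tagged by component kind
nodes = {
    "Customer": "terminator",       # external entity (data source/sink)
    "Validate Order": "process",
    "Fulfil Order": "process",
    "Orders": "data store",
}

# Data flows: (source, sink, data carried) -- the 'pipelines' of the DFD
flows = [
    ("Customer", "Validate Order", "order details"),
    ("Validate Order", "Orders", "accepted order"),
    ("Orders", "Fulfil Order", "pending order"),
    ("Fulfil Order", "Customer", "delivery note"),
]

def outgoing(node):
    """Data flows leaving a node."""
    return [(dst, data) for src, dst, data in flows if src == node]

print(outgoing("Validate Order"))
```

Recording the diagram as data like this also makes it easy to check simple consistency rules, such as that every flow starts or ends at a process.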
The approach to data flow diagramming should be as follows: create a data flow diagram for each of the major outputs of the system; work back from the outputs to the inputs to construct the diagram; add new objects to the object model where necessary as you discover the need for them in the data flow modelling; and add new operations and attributes to the object model as you discover the need for them in the data flow modelling.

Data Flow Description

The data flow symbol is a line with an arrow showing the direction of flow. It should be named using words that are understood within the department or organization describing the data. The data that leaves one process is exactly that which arrives at the next process.

External Entity Definition

The external entity is a source or recipient of data that is outside the boundary of investigation. The fundamental purpose of this symbol is to indicate that whatever happens at the end of the data flow is outside the scope of the investigation.

Entity Relationship Diagram

A logical data model is documented as an entity relationship model supported by the data items for each entity (conventionally in the form of a Third Normal Form relation). Though the relationships among data stores are not emphasized in a data flow diagram, they are well reflected in an ERD. The ERD is one of the most useful modelling tools for organizing this discussion. An ERD is a network model that describes the stored data of a system at a high level of abstraction. For the system analyst, an ERD has a major benefit: it highlights the relationships between data stores on a DFD which would otherwise only be seen in the specification process. The main components of an ERD include: Entity - a subject, a duty, or an event that has a significant meaning to the future system; Attribute - the characteristics of the entity, displayed by the fields or columns of a table.
Relationship - there are three major types of relationship used in ERDs: one-to-one, one-to-many, and many-to-many.

Entity: an entity is any type of object that we wish to store data about. Which entity types you decide to include on your diagram depends on your application. In an accounting application for a business you would store data about customers, suppliers, products, invoices and payments, and if the business manufactured the products, you would need to store data about materials and production steps. Each of these would be classified as an entity type because you would want to store data about each one. In an entity-relationship diagram an entity type is shown as a box. There may be many entity types in an entity-relationship diagram. The name of an entity type is singular since it represents a type.

Attributes: the data that we want to keep about each entity within an entity type is contained in attributes. An attribute is some quality of the entities that we are interested in and want to hold in the database. In fact we store the values of the attributes in the database. Each entity within the entity type will have the same set of attributes, but in general different attribute values. For example, the value of the attribute ADDRESS for a customer J. Smith in a CUSTOMER entity type might be 10 Downing St., London, whereas the value of the attribute ADDRESS for another customer J. Major might be 22 Railway Cuttings, Cheam.

Cardinality and Optionality

The maximum degree is called cardinality and the minimum degree is called optionality. In another context the terms degree and cardinality have different meanings: in [Date, 4th ed., p. 240], degree is the term used to denote the number of attributes in a relation, while cardinality is the number of tuples in a relation. Here, we are not talking about relations (database tables) but relationship types, the associations between database tables and the real-world entity types they model.
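The one-to-many case, such as the customer-invoice example above, is commonly implemented with a foreign key on the "many" side. A minimal sketch using SQLite (table and column names are illustrative, not taken from the text):

```python
import sqlite3

# One customer has many invoices: the one-to-many relationship is
# implemented with a foreign key on the 'many' side (invoice).
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE customer (
    id INTEGER PRIMARY KEY,
    name TEXT,
    address TEXT)""")
con.execute("""CREATE TABLE invoice (
    id INTEGER PRIMARY KEY,
    customer_id INTEGER REFERENCES customer(id),
    total REAL)""")
con.execute("INSERT INTO customer VALUES (1, 'J. Smith', '10 Downing St., London')")
con.executemany("INSERT INTO invoice VALUES (?, ?, ?)",
                [(1, 1, 120.0), (2, 1, 75.5)])
# Join across the relationship: count each customer's invoices
rows = con.execute("""SELECT c.name, COUNT(i.id)
                      FROM customer c JOIN invoice i ON i.customer_id = c.id
                      GROUP BY c.id""").fetchall()
print(rows)
```

A many-to-many relationship would instead be resolved into a linking table holding two foreign keys, one to each side.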
Entity Descriptions

CustomerInfo: stores the customer's personal details (name, address, etc.) recorded at check-in to the Hostel.
Stock: stores the details of stock items, in order to check for new items.
CheckIn: stores the information of the customer who was given the room.
Booking: stores the rooms that have been booked according to customers' orders.
Room: stores the room status information of the Hostel.
SaleService: stores the sale record for each customer and item.

Entity Life History

The ELH technique is based on concepts developed by Michael Jackson for structured program design. The essential idea is that all data processing can be described in terms of sequence (order), selection (choice) and iteration (repetition) of processing components, which are derived from the data structures. In an ELH these ideas are used by analogy to model sequences, selections and iterations of events affecting an entity. In between the birth and death events there may be a number of life events. Jackson's rules are observed in that the diagram shows that it is possible for there to be no changes between creation and end of life for a particular instance, as an iteration may occur zero, one or many times. Parallel lives are used when there are two (or more) independent sets of events that can affect an entity. As events from the two sets are not dependent on each other, but only on events from their own set, they cannot be ordered together in a predictable way. Quits and resumes are a means of jumping from one part of the diagram to another in order to cope with exceptional or unusual events. If used indiscriminately they can undermine the apparent structure of the diagram and make it more difficult to understand. Analysts should therefore use a quit and resume only when they are sure that there is no sensible way to show what they want using normal Jackson structures.
Normalisation

Any collection of attributes can be said to be either unnormalised or in a particular normal form, depending on its compliance with the rules given below. Many normal forms have been defined. Codd originally defined first, second and third normal forms. There are some cases, particularly where keys are complex and contain many attributes, where further normalisation may be required. For such cases, Boyce-Codd normal form, fourth normal form and fifth normal form also exist. In this book, normalisation will only be covered up to the third normal form, since this is sufficient for most practical purposes. For further information about the other normal forms the reader is referred to Date (2000).

Report for the National Hostelling Association

The computer-based system has many advantages compared to the manual system, although our system is still Windows-based. We will be able to add room service and e-commerce applications in the future. We have made the information given by our system easily understandable for new users of the system. The display area of the store is not very large; it will only display a small selection of the rooms and other sale items available to guests checking in to the Hostel. My system will allow customers to choose rooms, and to request rooms and services that are not on display. The customer can search for a room by keyword, by category (such as room, booking number, check-in/out, etc.) or by charges. The charges will apply the discount value for each invoice. The manager will help with these activities during busy periods and will be responsible for general management duties such as accounting, correspondence, staffing, etc. The manager will also take bookings from a number of customers as necessary, and will decide which of the services will be offered to the customer at discount prices. In order to do this, he will need information from the system.
The system also produces a monthly report in order to estimate room status and the customers' likes and dislikes regarding the services, i.e. the customer trend according to the season. The information above explains the functions of the National Hostelling Association. To produce a successful system design for the National Hostelling Association, I had to study the manual system first. Then I drew the context diagram. The context diagram shows the entire system as a single process surrounded by the external entities. The National Hostelling Association context diagram represents the data input and output flows. This makes it possible to concentrate or focus on the boundary to be investigated, and is a great help when discussing the scope of the system with the user. Aims and objectives are given to the system so the system user cannot depart from the system's needs. Once the context diagram is drawn, the level 1 DFD is also easily drawn, making the system easy to read and understand. This enhances the clarity of the system for the user. Then I created a data model to support the system information. It shows how the data items are grouped together into entities and identifies the relationships between the entities. To get the attributes for the entities, I studied the manual records and the receipts of the National Hostelling Association. Additional characteristics, such as the optionality and degree of the relationships, also need to be identified for the entities. Then I studied how the entities change with time. An ELH is drawn to describe the creation of an entity occurrence, record the sequence of changes in the system during its lifetime, and show how it ends in the system. Then I performed normalisation, which provides the sound foundation for the physical design that can be implemented as the database design. For the database design, all entities are included in the data dictionary, which is the source of information about the system.
Then I created the prototype using Visual Studio 2005. It includes searching for an item of the National Hostelling Association. I also took screenshots of the prototype and identified where the system needs validation rules. All the tasks shown above demonstrate an understanding of the modelling and installation of a data-driven system. They demonstrate the analysis and design of a system, including the prototype user interface and training plans for the users.

Preparation for the installation of the system

In order to install the system, we first need to install the hardware. Then we need to do the data entry for the items. We also need to install the software required to run the system. The requirements are as follows:

Hardware Requirements
Processor: Pentium IV or above, 1.8 GHz or faster
Memory (RAM): 512 MB or above
Hard disk space: 1 GB for my system and 10 GB for the operating system

Software Requirements
Windows XP
Visual Studio 2005
Microsoft Office Word 2003 (for reading the Manual Guide)

Overview of Green Wireless Networks

Abstract: Traditional mobile networks largely focus on availability, variety, stability and large capacity. Due to the rapid development of the Information and Communication Technology (ICT) industry, of which mobile networks are a major constituent, CO2 emissions have been increasing rapidly. This shows the need for energy efficient wireless networks, or green wireless networks, which put the emphasis on saving energy and protecting the environment. Current wireless networks concentrate mainly on non-energy-related factors such as Quality of Service (QoS), throughput and reliability, so these factors have to be kept in mind while transitioning to green wireless networks. The techniques that need to be implemented are aimed at improving energy efficiency without compromising QoS, throughput and reliability. In this paper the various metrics which help in evaluating the performance of wireless networks are reviewed. Different approaches to improving energy efficiency in wireless networks, and how to combine them for higher energy efficiency, are also discussed.

Introduction: The latest mobile phones provide multiple services, which has led to a rise in ICT traffic. ICT accounts for 2% of total Green House Gas (GHG) and CO2 emissions worldwide. Within the ICT sector, the mobile sector was responsible for 43% of emissions until 2002, while studies suggest that this number will go up to 51% by 2020 [1]. The predominant energy consuming part of a wireless network is the Radio Access Network (RAN). This comes from the fact that the RF power amplifier within the RAN consumes a lot of input power for operation and releases a lot of heat, contributing to energy wastage. In addition, the inconsistent distribution of real-world mobile traffic among the BSs leads to underutilization of the supplied energy [1].
These two reasons give us an idea of where the energy is being wasted or underutilized, helping us formulate new techniques for energy efficient wireless networks. While discussing various techniques for energy efficiency, we have to keep in mind that QoS must not be compromised at all: if an operator uses such a technique, it should be able to serve users with less energy, but not at the cost of the users' service. The various parts of a mobile network that consume power are the data centers in the backhaul, macro cells, femtocells, mobile stations or end hosts, and their services. But the parts that consume the most power are the power amplifier section and the base station or RAN section, so the various techniques presented in this paper are aimed at energy efficiency in these sections only. Section II of the paper outlines various metrics which can be used to evaluate the energy performance of systems. Section III discusses cell layout adaptation techniques for reducing energy consumption and is divided into 3 subsections that outline various cell shaping algorithms. Section IV explains some challenges and research directions for energy efficient networks, such as Cognitive Radio (CR) and M2M communication.

Metrics for measuring energy performance:

Energy efficiency can be achieved by employing better techniques, but in order to measure energy consumption or utilization, metrics are needed. An energy efficiency metric can be defined as the ratio of output to the input power supplied [1]. The output here may correspond to the transmission distance, the number of bits transmitted, the output power, etc. The metrics for energy efficiency are broadly categorized into 3 levels: component level metrics, access node level metrics and network level metrics. Component level metrics mainly focus on the power amplifier section, access node level metrics focus on the RAN or base station, and network level metrics focus on the performance of the RAN [1].
These metrics help to quantify the energy efficiency of various devices, and therefore make it easier to compare which technique is better. Firstly, at the component level, where we focus on the power amplifier section, there are two metric categories: analog and digital. The two important metrics of analog RF transmission are power amplifier efficiency (PA efficiency) and peak-to-average power ratio (PAPR). PA efficiency is the ratio of PA output power to the input power supplied to it. A higher value of PA efficiency is desired, but in reality this is the part where most of the input power is wasted. PAPR, as the name suggests, is the ratio of peak power to average power. A lower value of PAPR is desired, as higher values tend to reduce amplifier efficiency. The significant digital metrics at the component level are millions of instructions per second per watt (MIPS/W) and millions of floating point operations per second per watt (MFLOPS/W). Higher values of MIPS/W and MFLOPS/W are desired, as they refer to the digital output generated for a given power input [1]. Secondly, at the access node level there are two major metrics: power efficiency and radio efficiency. Power efficiency refers to the transmitted data rate over a given bandwidth and input power supplied; it is measured in bits per second per hertz per watt (b/s/Hz/W). Radio efficiency refers to the transmitted data rate and transmission distance over a given bandwidth and input power supplied; it is measured in bit-meters per second per hertz per watt (b-m/s/Hz/W) [1]. Higher values of power and radio efficiency are desired, as they measure the data rate and transmission distance, which should be as high as possible for a given power input. Finally, at the network level there are also two metrics, which measure the number of subscribers served during peak hours and the coverage area respectively.
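The component-level and access-node-level metrics described above are simple ratios, which can be sketched as follows (the example numbers are arbitrary illustrations, not measurements from the paper):

```python
def pa_efficiency(p_out_w, p_in_w):
    """PA efficiency: output power over supplied input power (higher is better)."""
    return p_out_w / p_in_w

def papr(peak_w, avg_w):
    """Peak-to-average power ratio (lower is better for PA efficiency)."""
    return peak_w / avg_w

def power_efficiency(bits_per_s, bandwidth_hz, p_in_w):
    """Access-node power efficiency in b/s/Hz/W."""
    return bits_per_s / (bandwidth_hz * p_in_w)

def radio_efficiency(bits_per_s, distance_m, bandwidth_hz, p_in_w):
    """Access-node radio efficiency in b-m/s/Hz/W."""
    return bits_per_s * distance_m / (bandwidth_hz * p_in_w)

# Arbitrary illustrative values
print(pa_efficiency(40.0, 100.0))        # 0.4
print(power_efficiency(1e6, 1e5, 20.0))  # 0.5 b/s/Hz/W
```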
The first metric measures the number of subscribers served during the peak hour per watt of supplied input power (subscribers/W), and the second measures the coverage area of the radio signal per watt of supplied input power (m2/W) [1]. Higher values of both metrics are desired, as they signify serving more subscribers or a larger coverage area for a given power input. Hence, when evaluating various techniques for wireless energy efficiency, it is better to know whether energy efficiency is improved at the component level, the access node level or the network level. That way it becomes easier to compare efficiency in terms of each level's individual metrics.

Reducing Energy Consumption through Cell Layout Adaptation:

Cell layout adaptation (CLA) techniques focus on energy efficiency at the network level. But for these techniques to improve energy efficiency, it is important to improve efficiency at the component level and access node level as well, because all three levels are inter-related and one works on the basis of another. Power is first supplied to the power amplifier, then to the RAN and finally to the network level; that means it is possible to save more energy at the component and access node levels, while the remaining energy used by the network can be efficiently utilized by implementing these cell layout adaptation techniques. CLA techniques are divided into 3 major categories: the first consists of cell shaping techniques such as turning off Base Stations (BSs) and cell breathing, the second consists of hybrid macro-femtocell deployments, and the third consists of relaying techniques [1].

A. Cell Shaping Techniques:

As mentioned earlier, turning off base stations and cell breathing make up the cell shaping techniques.
The basic idea behind the former is turning off BSs and redistributing the remaining traffic to neighboring base stations. Here we need to make sure that we are turning off BSs which are idle, or which have so little traffic that it can be taken up by neighboring cells. This way energy consumption is reduced, and only the BSs that have traffic will operate and consume energy. The cell breathing scheme goes one step further: instead of actually turning off BSs, it reduces the power consumption of a cell. This can be achieved by covering a shorter distance depending on the traffic. That means BSs experiencing higher traffic operate in full power mode, BSs with medium traffic operate in medium power mode, and cells with very little traffic operate in low power mode, thereby reducing the coverage area depending on subscriber traffic. This is analogous to a cell breathing according to traffic patterns. As these cell shaping techniques operate at the network level, the subscribers-served and coverage-area metrics should be maintained in order to ensure good QoS and a low call drop rate when implementing them. The broad idea of cell shaping techniques is described above, but to implement them there are two major algorithms: the switching-off network planning algorithm and the cell breathing coordination algorithm [1]. The switching-off network planning algorithm works on the basis of three factors: the number of BSs to turn off, the number of BSs to keep operating, and the time period for which BSs are turned off. The ratio of the number of BSs to turn off to the number of BSs to operate, and a specific time period for which the turn-off is implemented, are calculated based on the low-traffic pattern. Once these values are calculated, it is ensured that the blocking probability limit is not exceeded, which means a definite QoS is maintained. The cell breathing coordination algorithm works on the basis of a central node called a cell zooming server.
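The switching-off planning idea just described can be sketched as a greedy procedure. The loads, capacity and utilization cap below are hypothetical stand-ins for the blocking-probability limit, not values from the paper:

```python
def plan_switch_off(loads, capacity, max_util=0.8):
    """Greedy sketch: turn off the lightest-loaded base stations as long
    as the surviving ones can absorb the total traffic without exceeding
    max_util of their capacity (a stand-in for the blocking-probability
    limit).  'loads' maps BS id -> current traffic load.
    """
    on = dict(loads)
    off = []
    for bs in sorted(loads, key=loads.get):  # consider lightest-loaded first
        if len(on) == 1:
            break                            # always keep at least one BS on
        others = {k: v for k, v in on.items() if k != bs}
        total = sum(on.values())             # traffic is conserved, only moved
        if total <= max_util * capacity * len(others):
            on = others
            off.append(bs)
    return off

# Illustrative night-time loads (arbitrary units)
print(plan_switch_off({"bs1": 5, "bs2": 30, "bs3": 8, "bs4": 40}, capacity=60))
```

In this toy run the two lightly loaded stations are switched off, while the busy ones stay on because the survivors could not absorb their traffic within the utilization cap.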
The cell zooming server analyzes the incoming traffic and tries to turn off the BSs which do not have any traffic at all. It then tries to distribute the traffic from less active BSs to busy BSs. It also makes sure to distribute traffic based on the input traffic, and turns the sleeping BSs back on when required. This centralized approach works well in smaller networks; in large-scale networks it would be very ineffective. The same applies to the switching-off network planning algorithm, because there is no centralized node to turn the BSs back on if needed, as the turn-off time is fixed based on traffic patterns [1]. The cell shaping techniques also bring up a new trade-off: the SE-EE (spectral efficiency-energy efficiency) trade-off [3]. The SE-EE trade-off concerns network level characteristics such as the number of subscribers served and the coverage area for the input power supplied. Although energy efficiency is obtained by implementing these cell shaping techniques, there is always a chance that the coverage area is reduced and some subscribers are ignored. Ideally, the higher the energy efficiency, the lower the spectral efficiency. But in reality, because of component level energy issues, transmission distances and coding schemes, the relationship between SE and EE is not inversely proportional but has the form of a bell curve. So it is better to apply cell shaping techniques only up to the point where spectral efficiency is not compromised.

B. Hybrid macro-femtocell deployment:

Femtocell deployment in combination with macro cells is the second method under cell layout adaptation. Femtocell deployments provide coverage on the order of 10 meters and utilize a small BS, which requires less power to operate. Femtocell deployment is advantageous as it provides good coverage and QoS to a set of users within its range, with lower operating expenses compared to a macro BS [1].
Although femtocell deployment is a good concept, it is not desirable to have too many femtocells, as this increases power consumption and uses more network resources for a smaller coverage area. A better approach is a hybrid macro and femtocell deployment. In the hybrid case, the macro BS provides coverage to users who are spread evenly over a long distance, and the femtocell provides coverage to users who are located in a dense region. This way the energy is utilized efficiently, as a new macro BS does not have to be deployed to cover that dense set of users. The hybrid macro and femtocell deployment poses a new challenge for handoffs, as the macro BS and femtocell BS might have the same signal strength in each other's coverage region. The handoff issue can be solved by defining a clear boundary between the macro and femtocell BSs: within the dense region the femtocell should have higher signal strength and should properly hand off at the boundary of the macro BS, and within the coverage area of the macro BS the femtocell BS should have very low signal strength [1]. This gives a clear way to define a boundary. A better way of implementing this hybrid deployment is by utilizing the cell shaping techniques, such as BS turn-off and cell breathing coordination. If there is a set of femtocells and one of their coverage areas is totally idle, that femtocell BS can be turned off and basic coverage provided by the macro BS at that location. Similarly, by analyzing the incoming traffic, femtocells and macro cells can use cell breathing techniques to lower their power utilization [1]. The hybrid macro and femtocell deployment also leads to a DE-EE (deployment efficiency-energy efficiency) trade-off [3]. Ideally, energy efficiency increases as more femtocells are deployed, while deployment efficiency goes down because of the increase in deployment expenses, network utilization and energy consumption.
In a practical scenario, the relationship between DE and EE is more like a bell curve, with a peak point where deployment and energy efficiency are both in good standing. Hence it is a good idea to use hybrid deployment only up to the point where it does not degrade the deployment efficiency and energy efficiency.

C. Relaying techniques:

Energy efficiency can be achieved through two types of relaying techniques. The first uses repeater stations or green antennas for relaying, and the second uses mobile stations for relaying. In the first technique, a repeater station or a green antenna with receive capability is connected to the macro BS through a coax cable or optical fiber, with the latter utilizing less energy. These green antennas are placed very near the mobile stations, which helps to reduce the energy consumed in the uplink by the mobile stations. Although this technique improves energy efficiency for mobile stations, it increases operating expenses for the service provider. In the second technique, the mobile stations work in coordination and perform the relaying operation. This way the transmission distance for the macro BS is reduced and it consumes less energy. However, this technique assumes the mobile stations act as relays that work selflessly; in practice, the mobile stations may not cooperate, which would break the relaying link. A further drawback of this technique is that maintaining coordination between the mobile stations consumes additional energy [1].

Challenges and directions for energy efficient wireless networks:

Cognitive Radio (CR) and M2M (Machine to Machine) communication systems provide new opportunities in the field of green wireless networks, but also pose significant challenges at the same time. A cognitive radio can be defined as an RF transceiver that is used to switch users from a very busy spectrum to an unused one, and vice versa if needed.
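As a toy illustration of that idea (the band names, loads and congestion threshold are all made up), users can be moved one at a time from the most congested band to the least used one:

```python
def rebalance(bands, threshold):
    """Toy cognitive-radio rebalancing: while some band exceeds the
    congestion threshold and another still has room below it, move one
    user from the busiest band to the quietest band."""
    bands = dict(bands)  # users per band; copy so the input is untouched
    while max(bands.values()) > threshold and min(bands.values()) < threshold:
        busiest = max(bands, key=bands.get)
        quietest = min(bands, key=bands.get)
        bands[busiest] -= 1
        bands[quietest] += 1
    return bands

# A congested band next to two underutilized ones (invented numbers)
print(rebalance({"900MHz": 12, "1800MHz": 2, "2600MHz": 1}, threshold=5))
```

A real CR would also pay an energy cost for sensing the spectrum, which is exactly the trade-off the text goes on to discuss.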
This topic originated from the observation that many RF spectrum bands are congested with several users while others are underutilized. The concept of CR efficiently manages users across the various bands and helps to deliver better QoS. Indirectly, this switching of bands, or the utilization of unused bands, results in energy efficiency: bands with many users will not consume additional energy once users are transferred to another band, and underutilized bands, which were already consuming energy for operation, now serve the new users efficiently, resulting in better energy and spectrum utilization. The only disadvantage of the CR technique is that monitoring various RF bands and switching users from one band to another requires significant energy. Hence this technique is energy efficient only if more energy is saved by intelligently switching users than is used for monitoring the bands and users [2]. M2M wireless communication systems are aimed at connecting various wireless devices directly. This approach also helps in reducing energy consumption from the point of view of a mobile station. M2M helps to reduce the computation required by various physical devices by offloading it to the network itself. This way the mobile stations consume less energy, as the number of computations is reduced. The major disadvantage of this approach is that if more computation is offloaded to the main network, the network might consume more energy than is being saved by the mobile stations. Hence this technique is energy efficient only if the main network does not consume a lot of energy for the additional computations [2].

Conclusion and Future Scope:

The rise in carbon footprint, especially the contribution to it from the ICT sector and consequently the mobile sector, has led to interest in energy efficient wireless networks. Energy efficiency can be achieved at various levels, such as the power amplifier, the RAN and the network.
The techniques proposed in the paper focus on energy efficiency at the RAN and network levels. But they also have trade-offs, like DE-EE and SE-EE, which can be mitigated by emerging techniques such as CR and M2M communications. These emerging techniques can be improved so that they consume less energy for monitoring than at prevailing levels. Alongside that, at the power amplifier level, the current solution for energy efficiency is to use expensive components, which trades away the gains achieved by energy savings. Hence a future research direction would be addressing energy efficiency at the power amplifier level and improving the CR and M2M techniques.

References:

[1] Suarez, L., Nuaymi, L., and Bonnin, J.-M., "An overview and classification of research approaches in green wireless networks," EURASIP Journal on Wireless Communications and Networking, 2012.1 (2012), pp. 1-18.
[2] Wang, X., et al., "A survey of green mobile networks: Opportunities and challenges," Mobile Networks and Applications, 17.1 (2012), pp. 4-20.
[3] Chen, Y., Zhang, S., Xu, S., and Li, G. Y., "Fundamental trade-offs on green wireless networks," IEEE Communications Magazine, vol. 49, no. 6, pp. 30-37, June 2011.