Value at Risk calculations
FINANCE UNIVERSITY UNDER THE GOVERNMENT OF THE RUSSIAN FEDERATION
DEPARTMENT OF FOREIGN LANGUAGES - 1
ESSAY
“Value at Risk calculations”
Done by: Петухов Максим Сергеевич
Supervisor: Кантышева А.А.
Moscow 2014
Contents
- 1. What is VaR?
- 2. History of VaR
- 3. Measuring Value at Risk
- Variance-Covariance Method
- Historical Simulation
- Monte Carlo Simulation
- 4. Comparing Approaches
- 5. Limitations of VaR
- 6. VaR can be wrong
- 7. Criticism
- 8. Conclusion
- Bibliography
1. What is VaR?
In its most general form, the Value at Risk measures the potential loss in value of a risky asset or portfolio over a defined period for a given confidence interval. Thus, if the VaR on an asset is $100 million at a one-week, 95% confidence level, there is only a 5% chance that the value of the asset will drop more than $100 million over any given week. In its adapted form, the measure is sometimes defined more narrowly as the possible loss in value from “normal market risk” as opposed to all risk, requiring that we draw distinctions between normal and abnormal risk as well as between market and non-market risk.
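To make the definition concrete, here is a minimal sketch (not part of the original text) that recovers a figure like the $100 million VaR above from an assumed normal distribution of weekly value changes; the standard deviation is a hypothetical number chosen to reproduce that result.

```python
# Illustrative sketch: one-week 95% VaR under an assumed normal
# distribution of weekly changes in portfolio value. The inputs are
# hypothetical, chosen so the answer matches the $100 million example.
from scipy.stats import norm

mean_change = 0.0   # assumed expected weekly change in value ($ millions)
sigma = 60.8        # assumed weekly standard deviation ($ millions)

# The 95% VaR is the loss exceeded with only 5% probability,
# i.e. the negative of the 5th percentile of the value change:
var_95 = -(mean_change + sigma * norm.ppf(0.05))
print(f"One-week 95% VaR: ${var_95:.0f} million")  # ~ $100 million
```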
While Value at Risk can be used by any entity to measure its risk exposure, it is used most often by commercial and investment banks to capture the potential loss in value of their traded portfolios from adverse market movements over a specified period; this can then be compared to their available capital and cash reserves to ensure that the losses can be covered without putting the firms at risk.
Taking a closer look at Value at Risk, there are clearly key aspects that mirror our discussion of simulations in the last chapter:
1. To estimate the probability of the loss, with a confidence interval, we need to define the probability distributions of individual risks, the correlation across these risks and the effect of such risks on value. In fact, simulations are widely used to measure the VaR for asset portfolios.
2. The focus in VaR is clearly on downside risk and potential losses. Its use in banks reflects their fear of a liquidity crisis, where a low-probability catastrophic occurrence creates a loss that wipes out the capital and creates a client exodus. The demise of Long Term Capital Management, the investment fund with top pedigree Wall Street traders and Nobel Prize winners, was a trigger in the widespread acceptance of VaR.
3. There are three key elements of VaR - a specified level of loss in value, a fixed time period over which risk is assessed and a confidence interval. The VaR can be specified for an individual asset, a portfolio of assets or for an entire firm.
4. While the VaR at investment banks is specified in terms of market risks - interest rate changes, equity market volatility and economic growth - there is no reason why the risks cannot be defined more broadly or narrowly in specific contexts. Thus, we could compute the VaR for a large investment project for a firm in terms of competitive and firm-specific risks and the VaR for a gold mining company in terms of gold price risk.
In the sections that follow, we will begin by looking at the history of the development of this measure, ways in which the VaR can be computed, limitations of and variations on the basic measures and how VaR fits into the broader spectrum of risk assessment approaches.
2. History of VaR
While the term “Value at Risk” was not widely used prior to the mid-1990s, the origins of the measure lie further back in time. The mathematics that underlie VaR were largely developed in the context of portfolio theory by Harry Markowitz.
The first regulatory measures that evoke Value at Risk, though, were initiated in 1980, when the SEC tied the capital requirements of financial service firms to the losses that would be incurred, with 95% confidence over a thirty-day interval, in different security classes; historical returns were used to compute these potential losses. Although the measures were described as haircuts and not as Value or Capital at Risk, it was clear the SEC was requiring financial service firms to embark on the process of estimating one-month 95% VaRs and hold enough capital to cover the potential losses.
At about the same time, the trading portfolios of investment and commercial banks were becoming larger and more volatile, creating a need for more sophisticated and timely risk control measures. Ken Garbade at Bankers Trust, in internal documents, presented sophisticated measures of Value at Risk in 1986 for the firm's fixed income portfolios, based upon the covariance in yields on bonds of different maturities. By the early 1990s, many financial service firms had developed rudimentary measures of Value at Risk, with wide variations on how it was measured. In the aftermath of numerous disastrous losses associated with the use of derivatives and leverage between 1993 and 1995, culminating with the failure of Barings, the British investment bank, as a result of unauthorized trading in Nikkei futures and options by Nick Leeson, a young trader in Singapore, firms were ready for more comprehensive risk measures. In 1995, J.P. Morgan provided public access to data on the variances of and covariances across various security and asset classes that it had used internally for almost a decade to manage risk, and allowed software makers to develop software to measure risk. It titled the service “RiskMetrics” and used the term Value at Risk to describe the risk measure that emerged from the data. The measure found a ready audience with commercial and investment banks, and the regulatory authorities overseeing them, who warmed to its intuitive appeal. In the last decade, VaR has become the established measure of risk exposure in financial service firms and has even begun to find acceptance in non-financial service firms.
3. Measuring Value at Risk
There are three basic approaches that are used to compute Value at Risk, though there are numerous variations within each approach. The measure can be computed analytically by making assumptions about return distributions for market risks, and by using the variances in and covariances across these risks. It can also be estimated by running hypothetical portfolios through historical data or from Monte Carlo simulations. In this section, we describe and compare the approaches.
Variance-Covariance Method
Since Value at Risk measures the probability that the value of an asset or portfolio will drop below a specified value in a particular time period, it should be relatively simple to compute if we can derive a probability distribution of potential values. That is basically what we do in the variance-covariance method, an approach that has the benefit of simplicity but is limited by the difficulties associated with deriving probability distributions.
General Description
Consider a very simple example. Assume that you are assessing the VaR for a single asset, where the potential values are normally distributed with a mean of $120 million and an annual standard deviation of $10 million. With 95% confidence, you can assess that the value of this asset will not drop below $100 million (two standard deviations below the mean) or rise above $140 million (two standard deviations above the mean) over the next year. When working with portfolios of assets, the same reasoning will apply but the process of estimating the parameters is complicated by the fact that the assets in the portfolio often move together. As we noted in our discussion of portfolio theory in chapter 4, the central inputs to estimating the variance of a portfolio are the covariances of the pairs of assets in the portfolio; in a portfolio of 100 assets, there will be 4,950 covariances that need to be estimated, in addition to the 100 individual asset variances. Clearly, this is not practical for large portfolios with shifting asset positions.
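For a portfolio, the same calculation can be sketched in a few lines. All inputs below (positions, covariance matrix) are hypothetical; the portfolio standard deviation is the square root of x'Σx and the VaR is a normal quantile of it, with the expected return assumed to be zero.

```python
# Minimal variance-covariance VaR sketch with hypothetical inputs.
import numpy as np
from scipy.stats import norm

positions = np.array([40.0, 50.0, 30.0])   # $ millions in each asset
cov = np.array([[0.040, 0.006, 0.004],     # assumed annual covariance
                [0.006, 0.090, 0.010],     # matrix of asset returns
                [0.004, 0.010, 0.025]])

# Portfolio standard deviation in $ millions: sqrt(x' Sigma x)
port_sigma = np.sqrt(positions @ cov @ positions)

# One-year 95% VaR under normality, assuming zero expected return
var_95 = norm.ppf(0.95) * port_sigma
print(f"Annual 95% VaR: ${var_95:.1f} million")
```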
The strength of the Variance-Covariance approach is that the Value at Risk is simple to compute, once you have made an assumption about the distribution of returns and inputted the means, variances and covariances of returns. In the estimation process, though, lie the three key weaknesses of the approach:
* Wrong distributional assumption: If conditional returns are not normally distributed, the computed VaR will understate the true VaR. In other words, if there are far more outliers in the actual return distribution than would be expected given the normality assumption, the actual Value at Risk will be much higher than the computed Value at Risk (see the sketch after this list).
* Input error: Even if the standardized return distribution assumption holds up, the VaR can still be wrong if the variances and covariances that are used to estimate it are incorrect. To the extent that these numbers are estimated using historical data, there is a standard error associated with each of the estimates. In other words, the variance-covariance matrix that is input to the VaR measure is a collection of estimates, some of which have very large error terms.
* Non-stationary variables: A related problem occurs when the variances and covariances across assets change over time. This nonstationarity in values is not uncommon because the fundamentals driving these numbers do change over time. Thus, the correlation between the U.S. dollar and the Japanese yen may change if oil prices increase by 15%. This, in turn, can lead to a breakdown in the computed VaR. Not surprisingly, much of the work that has been done to revitalize the approach has been directed at dealing with these critiques.
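On the first weakness, a quick sketch shows how much a normality assumption can understate tail losses: a Student-t distribution scaled to the same standard deviation has a noticeably larger 1% loss quantile. The numbers are illustrative.

```python
# Fat tails vs. normality: 1% daily VaR at the same standard deviation.
import math
from scipy.stats import norm, t

sigma = 0.02   # assumed daily return standard deviation (2%)
df = 4         # degrees of freedom for an illustrative fat-tailed t

normal_var = -norm.ppf(0.01) * sigma
# Rescale the t so its standard deviation also equals sigma
# (a standard t with df > 2 has standard deviation sqrt(df/(df-2))):
t_var = -t.ppf(0.01, df) * sigma / math.sqrt(df / (df - 2))
print(f"1% daily VaR - normal: {normal_var:.2%}, Student-t(4): {t_var:.2%}")
```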
Historical Simulation
Historical simulations represent the simplest way of estimating the Value at Risk for many portfolios. In this approach, the VaR for a portfolio is estimated by creating a hypothetical time series of returns on that portfolio, obtained by running the portfolio through actual historical data and computing the changes that would have occurred in each period.
General Approach
To run a historical simulation, we begin with time series data on each market risk factor, just as we would for the variance-covariance approach. However, we do not use the data to estimate variances and covariances looking forward, since the changes in the portfolio over time yield all the information we need to compute the Value at Risk.
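A bare-bones sketch of the mechanics follows; the randomly generated matrix is only a stand-in for real historical factor returns, which is where actual market data would go.

```python
# Historical-simulation VaR sketch: run today's positions through past
# factor changes and take an empirical quantile of the resulting P&L.
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for 1,000 days of historical returns on three risk factors;
# in practice this comes from market data, not a random generator.
hist_factor_returns = rng.normal(0.0, 0.01, size=(1000, 3))

positions = np.array([40.0, 50.0, 30.0])   # $ millions per factor
pnl = hist_factor_returns @ positions      # hypothetical daily P&L

# One-day 99% VaR: the loss exceeded on only 1% of historical days
var_99 = -np.percentile(pnl, 1)
print(f"One-day 99% historical VaR: ${var_99:.2f} million")
```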
Assessment
While historical simulations are popular and relatively easy to run, they do come with baggage. In particular, the underlying assumptions of the model give rise to its weaknesses.
a. Past is not prologue: While all three approaches to estimating VaR use historical data, historical simulations are much more reliant on them than the other two approaches for the simple reason that the Value at Risk is computed entirely from historical price changes. There is little room to overlay distributional assumptions (as we do with the variance-covariance approach) or to bring in subjective information (as we can with Monte Carlo simulations). Oil prices provide a classic example: a portfolio manager or corporation that determined its oil price VaR based upon 1992 to 1998 data would have been exposed to much larger losses than expected over the 1999 to 2004 period, as a long period of oil price stability came to an end and price volatility increased.
b. Trends in the data: A related argument can be made about the way in which we compute Value at Risk, using historical data, where all data points are weighted equally. In other words, the price changes from trading days in 1992 affect the VaR in exactly the same proportion as price changes from trading days in 1998. To the extent that there is a trend of increasing volatility even within the historical time period, we will understate the Value at Risk (one common fix is sketched after this list).
c. New assets or market risks: While this could be a critique of any of the three approaches for estimating VaR, the historical simulation approach has the most difficulty dealing with new risks and assets for an obvious reason: there is no historic data available to compute the Value at Risk. Assessing the Value at Risk to a firm from developments in online commerce in the late 1990s would have been difficult to do, since the online business was in its nascent stage.
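One common fix for the equal-weighting problem in point b is to weight recent observations more heavily, as in the exponentially weighted scheme usually attributed to Boudoukh, Richardson and Whitelaw. A sketch, with the decay factor as an assumption:

```python
# Age-weighted historical VaR sketch (illustrative, not the only variant).
import numpy as np

def weighted_var(pnl, alpha=0.01, lam=0.99):
    """VaR as the alpha loss quantile of P&L, ordered oldest to newest,
    with each observation weighted by lam ** age."""
    ages = np.arange(len(pnl))[::-1]   # most recent observation has age 0
    w = lam ** ages
    w /= w.sum()
    order = np.argsort(pnl)            # sort P&L from worst loss upward
    cum = np.cumsum(w[order])
    # First P&L value at which cumulative weight reaches alpha:
    return -pnl[order][np.searchsorted(cum, alpha)]

# Example with simulated stand-in data:
rng = np.random.default_rng(0)
print(f"1% age-weighted VaR: {weighted_var(rng.normal(0, 1, 1000)):.2f}")
```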
Monte Carlo Simulation
In the last chapter, we examined the use of Monte Carlo simulations as a risk assessment tool. These simulations also happen to be useful in assessing Value at Risk, with the focus on the probabilities of losses exceeding a specified value rather than on the entire distribution.
General Description
The first two steps in a Monte Carlo simulation mirror the first two steps in the variance-covariance method, where we identify the market risks that affect the asset or assets in a portfolio and convert individual assets into positions in standardized instruments. It is in the third step that the differences emerge. Rather than compute the variances and covariances across the market risk factors, we take the simulation route, where we specify probability distributions for each of the market risk factors and specify how these market risk factors move together. Thus, for a six-month dollar/euro forward contract, the probability distributions for the 6-month zero coupon dollar bond, the 6-month zero coupon euro bond and the dollar/euro spot rate will have to be specified, as will the correlations across these instruments.
While the estimation of parameters is easier if you assume normal distributions for all variables, the power of Monte Carlo simulations comes from the freedom you have to pick alternate distributions for the variables. In addition, you can bring in subjective judgments to modify these distributions.
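A bare-bones Monte Carlo sketch along these lines, with assumed volatilities and correlations; the multivariate normal here is only a placeholder, and swapping in a fat-tailed alternative is exactly the freedom the approach offers.

```python
# Monte Carlo VaR sketch: draw correlated factor returns from an assumed
# joint distribution, revalue the portfolio, read off the loss quantile.
import numpy as np

rng = np.random.default_rng(1)
n_sims = 100_000

vols = np.array([0.010, 0.015, 0.008])     # assumed daily volatilities
corr = np.array([[1.0, 0.3, 0.1],          # assumed correlations
                 [0.3, 1.0, 0.2],
                 [0.1, 0.2, 1.0]])
cov = corr * np.outer(vols, vols)          # covariance matrix

sim_returns = rng.multivariate_normal(np.zeros(3), cov, size=n_sims)
positions = np.array([40.0, 50.0, 30.0])   # $ millions per factor
pnl = sim_returns @ positions

var_99 = -np.percentile(pnl, 1)
print(f"One-day 99% Monte Carlo VaR: ${var_99:.2f} million")
```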
The strengths of Monte Carlo simulations can be seen when compared to the other two approaches for computing Value at Risk. Unlike the variance-covariance approach, we do not have to make unrealistic assumptions about normality in returns. In contrast to the historical simulation approach, we begin with historical data but are free to bring in both subjective judgments and other information to improve forecasted probability distributions. Finally, Monte Carlo simulations can be used to assess the Value at Risk for any type of portfolio and are flexible enough to cover options and option-like securities.
Assessment
Much of what was said about the strengths and weaknesses of the simulation approach in the last chapter applies to its use in computing Value at Risk. Quickly reviewing the criticism, a simulation is only as good as the probability distributions for the inputs that are fed into it. While Monte Carlo simulations are often touted as more sophisticated than historical simulations, many users directly draw on historical data to make their distributional assumptions.
4. Comparing Approaches
Each of the three approaches to estimating Value at Risk has advantages and comes with baggage. The variance-covariance approach, with its delta normal and delta gamma variations, requires us to make strong assumptions about the return distributions of standardized assets, but is simple to compute, once those assumptions have been made.
The historical simulation approach requires no assumptions about the nature of return distributions but implicitly assumes that the data used in the simulation is a representative sample of the risks looking forward. The Monte Carlo simulation approach allows for the most flexibility in terms of choosing distributions for returns and bringing in subjective judgments and external data, but is the most demanding from a computational standpoint.
5. Limitations of VaR
While Value at Risk has acquired a strong following in the risk management community, there is reason to be skeptical of both its accuracy as a risk management tool and its use in decision making.
6. VaR can be wrong
There is no precise measure of Value at Risk, and each measure comes with its own limitations. The end-result is that the Value at Risk that we compute for an asset, portfolio or a firm can be wrong, and sometimes, the errors can be large enough to make VaR a misleading measure of risk exposure. The reasons for the errors can vary across firms and for different measures and include the following.
a. Return distributions: Every VaR measure makes assumptions about return distributions, which, if violated, result in incorrect estimates of the Value at Risk. With delta-normal estimates of VaR, we are assuming that the multivariate return distribution is the normal distribution, since the Value at Risk is based entirely on the standard deviation in returns. With Monte Carlo simulations, we get more freedom to specify different types of return distributions, but we can still be wrong when we make those judgments. Finally, with historical simulations, we are assuming that the historical return distribution (based upon past data) is representative of the distribution of returns looking forward.
b. History is not a good predictor: All measures of Value at Risk use historical data to some degree or another. In the variance-covariance method, historical data is used to compute the variance-covariance matrix that is the basis for the computation of VaR. In historical simulations, the VaR is entirely based upon the historical data, with the likelihood of value losses computed from the time series of returns. In Monte Carlo simulations, the distributions don't have to be based upon historical data, but it is difficult to see how else they can be derived. In short, any Value at Risk measure will be a function of the time period over which the historical data is collected. If that time period was a relatively stable one, the computed Value at Risk will be a low number and will understate the risk looking forward. Conversely, if the time period examined was volatile, the Value at Risk will be set too high. Earlier in this chapter, we provided the example of VaR for oil price movements and concluded that VaR measures based upon the 1992-98 period, where oil prices were stable, would have been too low for the 1999-2004 period, when volatility returned to the market.
7. Criticism
VaR has been controversial since it moved from trading desks into the public eye in 1994.
In a famous 1997 debate, Nassim Taleb claimed that VaR:
1. Ignored 2,500 years of experience in favor of untested models built by non-traders
2. Was charlatanism because it claimed to estimate the risks of rare events, which is impossible
3. Gave false confidence
4. Would be exploited by traders
More recently, David Einhorn and Aaron Brown debated VaR in the Global Association of Risk Professionals Review. Einhorn compared VaR to “an airbag that works all the time, except when you have a car accident.” He further charged that VaR:
1. Led to excessive risk-taking and leverage at financial institutions
2. Focused on the manageable risks near the center of the distribution and ignored the tails
3. Created an incentive to take “excessive but remote risks”
4. Was “potentially catastrophic when its use creates a false sense of security among senior executives and watchdogs.”
New York Times reporter Joe Nocera wrote an extensive piece, “Risk Mismanagement,” on January 4, 2009, discussing the role VaR played in the financial crisis of 2007-2008. After interviewing risk managers (including several of the ones cited above), the article suggests that VaR was very useful to risk experts, but nevertheless exacerbated the crisis by giving false security to bank executives and regulators. A powerful tool for professional risk managers, VaR is portrayed as both easy to misunderstand and dangerous when misunderstood.
Taleb, in 2009, testified in Congress asking for the banning of VaR on two grounds: first, that tail risks are not scientifically measurable, and second, that for anchoring reasons VaR leads to higher risk taking.
A common complaint among academics is that VaR is not subadditive: the VaR of a combined portfolio can be larger than the sum of the VaRs of its components. To a practising risk manager this makes sense. For example, the average bank branch in the United States is robbed about once every ten years. A single-branch bank has about a 0.03% chance (roughly one in 3,650) of being robbed on a specific day, so the risk of robbery would not figure into one-day 1% VaR. It would not even be within an order of magnitude of that, so it is in the range where the institution should not worry about it; it should insure against it and take advice from insurers on precautions. The whole point of insurance is to aggregate risks that are beyond individual VaR limits and bring them into a large enough portfolio to get statistical predictability. It does not pay for a one-branch bank to have a security expert on staff.
As institutions get more branches, the risk of a robbery on a specific day rises to within an order of magnitude of VaR. At that point it makes sense for the institution to run internal stress tests and analyze the risk itself. It will spend less on insurance and more on in-house expertise. For a very large banking institution, robberies are a routine daily occurrence. Losses are part of the daily VaR calculation and tracked statistically rather than case-by-case. A sizable in-house security department is in charge of prevention and control; the general risk manager just tracks the loss like any other cost of doing business.
As portfolios or institutions get larger, specific risks change from low-probability/low-predictability/high-impact to statistically predictable losses of low individual impact. That means they move from far outside VaR (to be insured), to near outside VaR (to be analyzed case by case), to inside VaR (to be treated statistically).
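The arithmetic behind this migration is easy to check. The sketch below, using only the once-every-ten-years figure, shows the chance of at least one robbery on a given day climbing toward the 1% VaR threshold as branches are added.

```python
# Worked arithmetic for the robbery example (all numbers illustrative).
p_daily = 1 / (10 * 365)   # one robbery per branch per decade -> ~0.03%/day

for branches in (1, 10, 100, 1000):
    # Probability of at least one robbery somewhere today, assuming
    # branches are independent:
    p_any = 1 - (1 - p_daily) ** branches
    print(f"{branches:>5} branches: {p_any:.4%}")
```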
Even VaR supporters generally agree there are common abuses of VaR:
1. Referring to VaR as a "worst-case" or "maximum tolerable" loss. In fact, you expect two or three losses per year that exceed one-day 1% VaR.
2. Making VaR control or VaR reduction the central concern of risk management. It is far more important to worry about what happens when losses exceed VaR.
3. Assuming plausible losses will be less than some multiple, often three, of VaR. The entire point of VaR is that losses can be extremely large, and sometimes impossible to define, once you get beyond the VaR point. To a risk manager, VaR is the level of losses at which you stop trying to guess what will happen next, and start preparing for anything.
4. Reporting a VaR that has not passed a back test. Regardless of how VaR is computed, it should have produced the correct number of breaks (within sampling error) in the past. A common specific violation of this is to report a VaR based on the unverified assumption that everything follows a multivariate normal distribution.
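On the last point, a back test reduces to counting breaks and asking whether the count is consistent with the model. A minimal sketch, with simulated stand-ins where the realized losses and the reported VaR series would go:

```python
# Back-test sketch: compare observed breaks of a one-day 1% VaR with the
# roughly 1%-of-days count a correct model implies.
import numpy as np
from scipy.stats import binomtest

rng = np.random.default_rng(2)
losses = rng.normal(0.0, 1.0, size=500)   # stand-in for realized daily losses
reported_var = 2.33                       # assumed (constant) reported 1% VaR

breaks = int(np.sum(losses > reported_var))
expected = 0.01 * len(losses)
result = binomtest(breaks, n=len(losses), p=0.01)
print(f"breaks: {breaks}, expected ~{expected:.0f}, p-value: {result.pvalue:.3f}")
```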
8. Conclusion
Value at Risk has developed as a risk assessment tool at banks and other financial service firms in the last decade. Its usage in these firms was driven by the failure of the risk tracking systems in place until the early 1990s to detect dangerous risk taking on the part of traders, and it offered a key benefit: a measure of capital at risk under extreme conditions in trading portfolios that could be updated on a regular basis.
While the notion of Value at Risk is simple - the maximum amount that you can lose on an investment over a particular period with a specified probability - there are three ways in which Value at Risk can be measured. In the first, we assume that the returns generated by exposure to multiple market risks are normally distributed. We use a variance-covariance matrix of all standardized instruments representing various market risks to estimate the standard deviation in portfolio returns and compute the Value at Risk from this standard deviation. In the second approach, we run a portfolio through historical data - a historical simulation - and estimate the probability that the losses exceed specified values. In the third approach, we assume return distributions for each of the individual market risks and run Monte Carlo simulations to arrive at the Value at Risk. Each measure comes with its own pluses and minuses: the variance-covariance approach is simple to implement but the normality assumption can be tough to sustain, historical simulations assume that the past time periods used are representative of the future, and Monte Carlo simulations are time and computation intensive. All three yield Value at Risk measures that are estimates and subject to judgment.
We understand why Value at Risk is a popular risk assessment tool in financial service firms, where assets are primarily marketable securities, capital is limited and regulators emphasize short term exposure to extreme risks. We are hard pressed to see why Value at Risk is of particular use to non-financial service firms, unless they are highly levered and risk default if cash flows or value fall below a pre-specified level. Even in those cases, it would seem to us to be more prudent to use all of the information in the probability distribution rather than a small slice of it.
Bibliography
1. Stein, J.C., S.E. Usher, D. LaGattuta and J. Youngen, 2000, A Comparables Approach to Measuring Cashflow-at-Risk for Non-Financial Firms, Working Paper, National Economic Research Associates.
2. Larsen, N., H. Mausser and S. Uryasev, 2001, Algorithms for Optimization of Value-at-Risk, Research Report, University of Florida.
3. Embrechts, P., 2001, Extreme Value Theory: Potential and Limitations as an Integrated Risk Management Tool, Working Paper (listed on GloriaMundi.org).
4. Basak, S. and A. Shapiro, 2001, Value-at-Risk Based Management: Optimal Policies and Asset Prices, Review of Financial Studies, v14, 371-405.
5. Ju, X. and N.D. Pearson, 1998, Using Value-at-Risk to Control Risk Taking: How Wrong Can You Be?, Working Paper, University of Illinois at Urbana-Champaign.
6. Hallerbach, W.G. and A.J. Menkveld, 2002, Analyzing Perceived Downside Risk: The Component Value-at-Risk Framework, Working Paper.
7. Jorion, P., 2002, How Informative Are Value-at-Risk Disclosures?, The Accounting Review, v77, 911-932.
8. Marshall, C. and M. Siegel, 1997, Value at Risk: Implementing a Risk Measurement Standard, Journal of Derivatives, v4, 91-111. Different measures of Value at Risk are estimated using different software packages on the J.P. Morgan RiskMetrics data and methodology.
9. Berkowitz, J. and J. O'Brien, 2002, How Accurate Are Value-at-Risk Models at Commercial Banks?, Journal of Finance, v57, 1093-1111.
10. Hendricks, D., 1996, Evaluation of Value-at-Risk Models Using Historical Data, Federal Reserve Bank of New York Economic Policy Review, v2, 39-70.