Saturday, 26 December 2015

Notes on "Herd behavior in financial markets: an experiment with financial market professionals" (Cipriani, M.; Guarino, A.)

Herd behavior in financial markets: an experiment with financial market professionals
Cipriani, M.; Guarino, A.
Journal of the European Economic Association, 7 (1), pp. 206-233 (March 2009)

the extent to which trading in financial markets is characterised by herd behaviour 

herding may have both on financial markets’ stability and on the markets’ ability to achieve allocative and informational efficiency

Surveys:
Gale 1996
Hirshleifer and Teoh 2003
Chamley 2004
Vives 2008

To test herding models directly with data from financial markets:
sample consists of financial market professionals
the existing literature has tested for the presence of herding in a market where, according to the theory, herding should never arise
use a strategy method-like procedure that could help to detect herding behaviour directly

Treatment I: subjects should use their private information and never herd
Treatment II: herding becomes optimal because of event uncertainty

The theoretical model of Avery and Zemsky (1998)
similar to that of Glosten and Milgrom (1985) and Easley and O’Hara (1987)

An informed trader engages in cascade behaviour if he chooses the same action independently of the private signal. If the chosen action conforms to the majority of past trades, the trader engages in herd behaviour. If the chosen action goes against the majority of past trades, the trader engages in contrarian behaviour.
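
This taxonomy can be sketched in a few lines of code. A hedged, illustrative version (names and the simplification of treating a trade against one's own signal as a proxy for cascade behaviour are my own, not the paper's):

```python
def classify_trade(action, signal_action, history):
    """Taxonomy sketch: `action` is the trade chosen, `signal_action` the trade
    the private signal alone would dictate, `history` a list of past trades.
    Trading against one's own signal proxies for cascade behaviour; herd vs
    contrarian then depends on the majority of past trades."""
    if action == signal_action:
        return 'informative'      # follows the private signal
    buys, sells = history.count('buy'), history.count('sell')
    if buys == sells:
        return 'cascade'          # against the signal, but no majority to follow
    majority = 'buy' if buys > sells else 'sell'
    return 'herd' if action == majority else 'contrarian'
```

For example, buying against a sell signal after a buy-majority history would be classified as herding, while selling against a buy signal in the same history would be contrarian.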


the challenge for future research is twofold. On the one hand, the existing experimental results offer suggestions for research with field data, which should study whether the behaviours observed in the laboratory are present in actual financial markets. On the other hand, more theoretical work is needed to capture the behaviour that the present model is unable to predict, such as contrarianism and abstention from trading activity.

Saturday, 19 December 2015

Notes on "Big Data: New Tricks for Econometrics" (Hal R. Varian)

Big Data: New Tricks for Econometrics
Hal R. Varian
Journal of Economic Perspectives, 2014, Vol. 28(2), pp. 3-28 [Peer Reviewed Journal]

the sheer size of the data involved may require more powerful data manipulation tools
we may have more potential predictors than appropriate for estimation, so we need to do some kind of variable selection
large datasets may allow for more flexible relationships than simple linear models

Einav and Levin 2013: new more detailed data

Sullivan 2012, Google uses many of these tools



Out-of-sample predictions:
since simpler models tend to work better for out-of-sample forecasts, machine learning experts have come up with various ways to penalise models for excessive complexity - regularisation
it is conventional to divide the data into separate sets for the purpose of training, testing, and validation.
the standard way to choose a good value for such a tuning parameter is to use k-fold cross-validation
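
As a concrete sketch of choosing a tuning parameter by k-fold cross-validation, the following numpy-only example picks a ridge penalty λ (the choice of ridge regression and the candidate grid are illustrative, not from the paper):

```python
import numpy as np

def kfold_cv_ridge(X, y, penalties, k=5, seed=0):
    """Pick a ridge penalty by k-fold cross-validation (numpy-only sketch)."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(y)), k)
    avg_mse = []
    for lam in penalties:
        errs = []
        for i in range(k):
            test = folds[i]
            train = np.concatenate([folds[j] for j in range(k) if j != i])
            # ridge estimate: (X'X + lam*I)^{-1} X'y on the training folds
            beta = np.linalg.solve(X[train].T @ X[train] + lam * np.eye(X.shape[1]),
                                   X[train].T @ y[train])
            errs.append(np.mean((y[test] - X[test] @ beta) ** 2))
        avg_mse.append(np.mean(errs))
    return penalties[int(np.argmin(avg_mse))]

# toy data: 5 predictors, only the first two matter
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
y = X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200)
best = kfold_cv_ridge(X, y, penalties=[0.01, 0.1, 1.0, 10.0])
```

Each candidate penalty is scored by its average held-out error, and the minimiser is selected.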

ways to improve classifier performance:
bootstrap involves choosing a sample of size n from a dataset of size n to estimate the sampling distribution of some statistic. A variation is the “m out of n bootstrap” which draws a sample of size m from a dataset of size n>m.
Bagging involves averaging across models estimated with several different bootstrap samples in order to improve the performance of an estimator.
boosting involves repeated estimation where misclassified observations are given increasing weight in each repetition. The final estimate is then a vote or an average across the repeated estimates.
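
The n-out-of-n bootstrap described above can be sketched directly; this toy example estimates the sampling distribution of the sample median (the statistic and data are arbitrary illustrations):

```python
import numpy as np

def bootstrap_dist(data, stat, reps=2000, m=None, seed=0):
    """Approximate the sampling distribution of `stat` via the bootstrap.
    m=None gives the standard n-out-of-n bootstrap; m < n gives the
    'm out of n' variant mentioned in the notes."""
    rng = np.random.default_rng(seed)
    m = len(data) if m is None else m
    return np.array([stat(rng.choice(data, size=m, replace=True))
                     for _ in range(reps)])

data = np.random.default_rng(2).exponential(size=500)
medians = bootstrap_dist(data, np.median)
se = medians.std()  # bootstrap standard error of the sample median
```

Bagging then amounts to averaging model predictions fit on such resamples, and boosting replaces the uniform resampling with weights that grow on misclassified observations.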

Random forests is a technique that uses multiple trees. A typical procedure uses the following steps.
1. Choose a bootstrap sample of the observations and start to grow a tree.
2. At each node of the tree, choose a random sample of the predictors to make the next decision. Do not prune the trees.
3. Repeat this process many times to grow a forest of trees.
4. In order to determine the classification of a new observation, have each tree make a classification and use a majority vote for the final prediction.
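
A from-scratch sketch of the four steps, using depth-1 trees (stumps) instead of full unpruned trees purely to keep the code short - real implementations grow deep trees:

```python
import numpy as np

def fit_stump(X, y, feat_ids):
    """Grow a depth-1 'tree': best single split over a random feature subset."""
    best = None
    for j in feat_ids:
        for t in np.quantile(X[:, j], [0.25, 0.5, 0.75]):
            mask = X[:, j] <= t
            left, right = y[mask], y[~mask]
            if len(left) == 0 or len(right) == 0:
                continue
            pl, pr = round(left.mean()), round(right.mean())  # majority label per side
            err = (left != pl).sum() + (right != pr).sum()
            if best is None or err < best[0]:
                best = (err, j, t, pl, pr)
    return best[1:]

def grow_forest(X, y, n_trees=50, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    forest = []
    for _ in range(n_trees):
        rows = rng.choice(n, size=n, replace=True)                 # step 1: bootstrap sample
        feats = rng.choice(p, size=max(1, p // 2), replace=False)  # step 2: random predictor subset
        forest.append(fit_stump(X[rows], y[rows], feats))          # step 3: repeat -> forest
    return forest

def predict(forest, x):
    votes = [l if x[j] <= t else r for j, t, l, r in forest]
    return int(round(np.mean(votes)))                              # step 4: majority vote

# toy usage: the class depends only on the first predictor
rng = np.random.default_rng(3)
X = rng.normal(size=(300, 4))
y = (X[:, 0] > 0).astype(int)
forest = grow_forest(X, y)
pred = predict(forest, X[0])
```

Each tree sees a different bootstrap sample and a different predictor subset, which decorrelates the trees and is what makes the majority vote effective.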

spike-and-slab regression, a Bayesian technique
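
To make the spike-and-slab idea concrete: each coefficient is exactly zero (the "spike") with some probability and otherwise drawn from a diffuse normal (the "slab"), so the prior itself encodes variable selection. This sketch only draws from such a prior - full inference would require MCMC, which is out of scope here - and all parameter values are illustrative:

```python
import numpy as np

def spike_slab_draw(p, incl_prob=0.2, slab_sd=2.0, seed=0):
    """Draw a coefficient vector from a spike-and-slab prior:
    beta_j = 0 ('spike') with prob 1 - incl_prob,
    else beta_j ~ N(0, slab_sd^2) (the 'slab')."""
    rng = np.random.default_rng(seed)
    included = rng.random(p) < incl_prob
    beta = np.where(included, rng.normal(0.0, slab_sd, size=p), 0.0)
    return beta, included

beta, included = spike_slab_draw(p=1000)
sparsity = np.mean(beta == 0)  # most coefficients are exactly zero
```

The posterior inclusion probabilities implied by such a prior are what make the technique useful when there are more potential predictors than observations warrant.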


Saturday, 12 December 2015

Notes on "Some reflections on financial fragility in banking and finance" Chick, V

Some reflections on financial fragility in banking and finance
Chick, V
Journal of Economic Issues, 31 (2), pp. 535-541. (1997)
1997

Keynes: investment causes saving, depends for its validity on some fraction of any new, higher level of investment being financed by the banks, because of their ability to finance in excess of saving

Problems: the amount of investment financed by banks was probably always small, and it is probably shrinking


The system described in The General Theory, with all its threatening emphasis on uncertainty and instability, is a picnic by comparison.

Sunday, 6 December 2015

Notes on "The Minimum Economic Dividend for Joining a Currency Union" (Ca’ Zorzi, Michele; De Santis, Roberto A.; Zampolli, Fabrizio)

The Minimum Economic Dividend for Joining a Currency Union
Ca’ Zorzi, Michele; De Santis, Roberto A.; Zampolli, Fabrizio
German Economic Review, 2012, Vol. 13(2), pp. 127-141 [Peer Reviewed Journal]

how the optimality of a currency union depends on whether it brings an economic dividend in terms of potential growth and the Balassa–Samuelson (BS) effect

Kenen 1969; McKinnon, 1963; Mundell, 1961:
the theory states that the benefits from lower transaction costs and greater price transparency must outweigh the cost of giving up an independent monetary policy and flexible nominal exchange rates

The traditional theory, however, focused mainly on the factors affecting the cost of renouncing an autonomous stabilisation policy. It gave prominence neither to the benefits of a common currency over and above transaction cost savings and greater price transparency, nor to the fact that many of the conditions cited for or against the creation of a common currency are not static but endogenous to the policy regime.

Our approach has the advantage of being conceptually simple and able to reconcile old and new arguments for and against monetary union in a unified framework.

The main contribution of this paper is to show, in a unified analytical two-sector, two-country general equilibrium model, how the optimality conditions for forming a currency union depend on both the increase in potential output and the BS effect.

The pros include the removal of the distortionary effects on potential output arising from currency risk and output fluctuation. The cons include the emergence of cross-country inflation differentials due to large intersectoral productivity gaps and the reduced macroeconomic stabilisation in an environment of supply and real exchange rate shocks.


The results suggest that both the BS effect and the size of real exchange rate shocks matter for evaluating the optimality of joining a currency union.

Monday, 30 November 2015

Notes on "What does independence mean to central banks?"

What does independence mean to central banks?
http://www.bankofengland.co.uk/education/Documents/ccbs/publications/pdf/mpfagc/section6.pdf

by far the most important factor by which most central banks define independence is the capacity to set instruments and operating procedures; 80% of central banks across a broad range of economies mentioned this in their responses

the effectiveness of formal arrangements providing central banks with instrument independence may, however, be undermined by a number of factors that are represented by bars 

the 38% of respondents who defined independence by relating it to the central bank’s statutory objectives generally fall into two categories:
  1. central banks’ mandate and statutory objectives 
  2. the statutory objectives of central banks in money and exchange rate targeting countries

government has a role in setting the exchange rate target, yet the lack of freedom to set an exchange rate target does not appear to influence how a central bank defines independence

based on survey results:

  1. central banks define independence as an absence of factors that constrain their ability to set instruments in pursuit of objectives
  2. the results throw interesting light on ‘goal independence’. The ability to set targets independently of the government was not generally considered to be important in countries targeting inflation in low-inflation economies. For disinflating countries, however, it has proved harder to devise clear ‘instrument-independent’ relationships between central bank and government based on inflation targets, in which government sets a clear target and the central bank sets instruments to meet the target
  3. finally, the results shed some light on the capacity of measures of independence to explain performance. Posen (1998) is among those to have pointed out that cross-country measures of independence are not always good indicators of performance. Our results provide some reasons why. The factors that affect perceived central bank independence are highly diverse. They include laws, instruments, targets, and government deficit finance. And the relative importance of each of these factors may vary markedly across countries, time, and circumstances. 

Saturday, 28 November 2015

Some literature about online finance

Detlev S. Schlichter, Paper Money Collapse: The Folly of Elastic Money, United States: John Wiley & Sons Inc, 2014

An online market is a market without paper money, and this book sketches an image of a system without paper money. It can help me think about what kinds of risks the online market may face and how it may react to some macroeconomic changes.

Mary J. Cronin (Editor), Banking and Finance on the Internet, John Wiley & Sons, Inc., 1998
The book contains many early studies about how the Internet might affect the banking and finance system. We can see that it raises some concerns about potential risks. We can look back and see whether we have solved the problems raised seventeen years ago.

Dongyu Chen, Hao Lou, Craig Van Slyke, Toward an Understanding of Online Lending Intentions: Evidence from a Survey in China, Communications of the Association for Information Systems, Volume 36, Article 17, 2015
This article discusses people’s intentions to lend their money online under the peer-to-peer system. The conclusion suggests the intention is based on people’s trust. The article’s limitation is that it focuses on China rather than the world economy. However, it helps to understand how people make their decisions. From the decision-making process, we can find out the risks of such a lending system.

Oren Rigbi, The Effects of Usury Laws: evidence from the online loan market, The Review of Economics and Statistics, 2013
This paper discusses the effects of usury laws using data collected from the online loan market, so we can see the effect of information technology on the traditional loan market. In addition, we can get some idea of whether the online loan market is riskier and who benefits from it.

Amitrajeet A. Batabyal, Hamid Beladi, A model of microfinance with adverse selection, loan default, and self-financing, Agricultural Finance Review, 2010
The sizes of loans issued online are usually relatively small, so the model of microfinance discussed in the article could be used to explain how people lend and borrow money online.

Saturday, 21 November 2015

Notes on "Risk Preferences Are Not Time Preferences" (Andreoni, James; Sprenger, Charles)

Risk Preferences Are Not Time Preferences
Andreoni, James; Sprenger, Charles

American Economic Review, 2012, Vol.102(7), pp.3357-3376 [Peer Reviewed Journal]

Relevant literature:
Frederick, Loewenstein, and O’Donoghue 2002 
Allais (1953) 
Camerer (1992) 
Harless and Camerer (1994)
Starmer (2000) 
Andreoni and Sprenger (2012a) 
Halevy 2008 
Green 1987 
Machina 1989 


We construct our test using two baseline risk conditions: 
  1. a risk-free condition where all payments, both sooner and later, will be made 100 percent of the time
  2. a risky condition where, independently, sooner and later payments will be made only 50 percent of the time, with all uncertainty resolved during the experiment 

the standard discounted expected utility (DEU) model is written 
U = Σ_{k=0}^{T} δ^{t+k} E[v(c_{t+k})]

the future is inherently risky
If individuals exhibit a preference for certainty when it is available, then present certain consumption will be favoured over future uncertain consumption. When only uncertain future consumption is considered, individuals act more closely in line with expected utility and apparent preference reversals are generated. 
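
A minimal numeric illustration of why DEU cannot accommodate a certainty preference (the utility function v, discount factor δ, and payoffs are arbitrary choices of mine, with t set to 0):

```python
import numpy as np

def deu(c, delta=0.95, pay_prob=1.0, v=np.sqrt):
    """Discounted expected utility of a payment stream c = (c_0, ..., c_T):
    U = sum_k delta^k * pay_prob * v(c_k), with v(0) = 0 if a payment fails."""
    c = np.asarray(c, dtype=float)
    k = np.arange(len(c))
    return np.sum(delta ** k * pay_prob * v(c))

certain = deu([100, 100], pay_prob=1.0)  # risk-free condition: all payments made
risky = deu([100, 100], pay_prob=0.5)    # risky condition: each payment made w.p. 0.5
# under DEU, utility is linear in the payment probability, so risky == 0.5 * certain:
# certainty carries no special premium, which is exactly what the experiment rejects
```

Because expected utility scales linearly in probabilities, any observed premium on the 100-percent condition over the 50-percent condition is evidence against the model.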

Saturday, 14 November 2015

Notes on "Two targets, two instruments: Monetary and exchange rate policies in emerging market economies" (Ghosh, Atish R. ; Ostry, Jonathan D. ; Chamon, Marcos)


Two targets, two instruments: Monetary and exchange rate policies in emerging market economies 
Ghosh, Atish R. ; Ostry, Jonathan D. ; Chamon, Marcos
Journal of International Money and Finance [Peer Reviewed Journal]

Two instruments to stabilise inflation and output while attenuating disequilibrium currency movements:
the policy interest rate
sterilised foreign exchange market intervention

inflation targeting is appropriate for emerging market economies (EMEs) that lack other nominal anchors, but it should be supplemented by judicious foreign exchange intervention, especially in the face of volatile capital flows

most EME central banks - even those with formal IT frameworks - appear to care about exchange rate volatility, adjusting the policy interest rate in response to exchange rate movements and undertaking sterilised interventions:
sterilised intervention is more likely to be effective in EMEs than in advanced economies

using a simple open economy model with imperfect capital mobility: while discretionary monetary policy allows the central bank to better respond to domestic and foreign shocks - and this is welfare-enhancing when the central bank cares about exchange rate volatility - it may also impart an inflationary bias to the economy when the central bank is contending with time consistency or credibility problems
FX intervention is fully consistent with the central bank meeting its inflation target under IT, and is welfare enhancing provided the central bank indeed penalises exchange rate volatility. the model further implies that there will be a larger gain to adding FX intervention to the toolkit when the central bank has an IT framework than when it sets discretionary policies, and that the gains from adding the second instrument to an IT framework are larger than the exchange rate stabilisation gains of switching to discretionary policies.

FX intervention is more likely to play an important role when the interest rate sensitivity of capital flows with respect to the return differential is low (in the limit where that sensitivity goes to infinity and Uncovered Interest Parity holds, FX intervention will have no traction). Assuming that FX intervention is costly, it is also more likely to play a more important role in smoothing temporary shocks, playing a relatively minor role in the case of more persistent shocks.

with the credibility of most EME central banks not yet fully established, discretionary monetary policies would not be appropriate even though they allow the central bank to better respond to shocks that generate costly exchange rate movements. Rather, by adding FX intervention to their arsenal, EME central banks with IT frameworks can capture much of the currency stabilisation gains that discretionary policies afford—without jeopardising the hard-won credibility about their commitment to maintain low inflation. Indeed, not only is FX intervention fully consistent with inflation targeting, it may actually enhance the credibility of the central bank's inflation target.

Saturday, 7 November 2015

Notes on "Competition among differentiated health plans under adverse selection" (P. Olivella and M. Vera-Hernandez)

Competition among differentiated health plans under adverse selection 
P. Olivella and M. Vera-Hernandez
Discussion Papers in Economics 03-04, Department of Economics, University College London, UK (2003)


  • the effect of horizontal differentiation in a model of competition among pre-paid health care plans in the presence of adverse selection
  • interpret horizontal differentiation as geographical differentiation, which is taken as exogenous
  • one of the important empirical implications of the paper is that profits derived from a low risk are higher than from a high risk. The authors prove by example that there exist equilibria with cross-subsidisation, i.e., where the profits derived from high risks are negative.
  • the planner has a role to play for sure precisely when the equilibrium presents cross-subsidisation.
    • the planner can improve both types' welfare without hurting profits by forcing both firms to correct the distortion imposed on the low risks
  • allow us to give more precise predictions on the equilibrium mix of agents that each health plan attracts and on the minimal firm complexity even if transportation costs are arbitrarily small but positive
  • allowing firms to locate at points on the line other than the extremes would introduce two possible extensions
    • a simple one where locations are exogenous but not at the extremes: firms would gain market power with respect to the agents located around their closest extreme, but lose market power with respect to the agents located around the centre of the line
    • firms choose their location simultaneously in a stage previous to competition: by changing the traveling costs from a linear to a quadratic function of distance, we believe we would obtain the maximum-differentiation result and our results would not be altered.


Sunday, 1 November 2015

Notes on "Risk Topography" (Markus K. Brunnermeier, Gary Gorton, and Arvind Krishnamurthy)

Risk Topography
Markus K. Brunnermeier, Gary Gorton, and Arvind Krishnamurthy
May 31, 2011

a risk topography outlines a data acquisition and dissemination process that informs about systemic risk.

systemic risk:

  1. cannot be detected based on measuring cash instruments
  2. typically builds up in the background before materialising in a crisis
  3. is determined by market participants' endogenous response to various shocks

Outline a system of measuring risks and liquidity in the financial sector and producing a risk topography for the economy:
·      Improve on the standard accounting paradigms in capturing risk
·      Reveal risk and liquidity pockets in the economy
·      Improve on current macroeconomic models that for the most part do not incorporate a financial sector

Macro models with financial frictions focus on leverage and the dynamics of net worth/capital, limiting the leverage ratio, while models in finance highlight in addition the important role of liquidity
·      Bernanke, Gertler and Gilchrist (BGG) (1999)
·      Kiyotaki and Moore (KM) (1997)
·      Brunnermeier and Sannikov (2010)

BGG: the net worth of the financial sector is an important state variable in driving macroeconomic phenomena. Net worth is commonly thought of as the equity capital of the financial sector. Thus, in this model, when banks take losses that deplete their equity, they increase the rates charged on loans and/or cut back on lending, thus causing a credit crunch.

KM: add an ingredient to BGG
Agents in the model have collateral that they pledge to raise funds from lenders. Since the market value of agents' collateral is partly dependent on their financial health, it affects the value of capital. With high leverage, losses deplete capital more dramatically and feed back to further reducing the market value of collateral, and so on.

Diamond and Dybvig (1983):
·      It is not just borrowing or leverage of the financial sector that is salient, but rather the proportion of debt that is comprised of short-term demandable deposits.
·      When the financial sector holds illiquid assets financed by short-term debt, the possibility of counterparty run behaviour emerges that can precipitate a crisis.
feedback mechanism between capital problems and liquidity problems: triggered by a run by lenders, the sector sells assets whose prices then reflect illiquidity. The lower asset prices lead to losses that deplete capital, further compromising liquidity.

Brunnermeier and Pedersen(2009):
  • Interaction between funding liquidity and market liquidity for modern collateralised (wholesale) funding market.
  • Liquidity spirals and collateral runs
    • An adverse shock heightens volatility, leading to higher margins. This lowers funding liquidity and forces institutions to fire-sell their assets, thus depressing the market liquidity of assets and increasing volatility further.

Leverage is well-defined in simple stylized models, but is an ill-defined measure in practice.

Liquidity measurement
  •   Ignores future risk
  •   Problem of duration
  •   The liquidity mismatch index is designed to capture sensitivities to these kinds of issues, which are poorly captured by any current reporting system.

The choice of scenarios is critical:
  • the propagation and patterns of a crisis are similar across events. By collecting data on a core set of factors that are held constant over time, the data can shed light on the common propagation patterns that underlie all financial crises.
  • history suggests that the trigger for crises varies from event to event. Thus at any time the regulator needs to choose factors that are informed by prevailing economic conditions.
  • in most cases, particular cross-scenarios are of special interest.
  • scenarios can include events that have never happened before, that is events that are not in recorded experience.

In the spirit of Kaldor (1961), potential stylized facts concerning the interaction between the financial sector and macro could be :
1.   The risk-deltas tend to display a high coherence with more traditional measures of economic activity, such as output or hours worked. That is, the risk-deltas of different sectors tend to be positively correlated with output.
2.   The risk-deltas in financial firms rise from trough to peak, and fall from peak to trough.
3.   Risk becomes more concentrated over the cycle. This does not necessarily mean that on average risk in individual firms becomes concentrated in certain risk sectors.
4.   Real estate-related risk is the main type of risk for 1 and 2.
5.   The liquidity aggregate is countercyclical, declining as output rises.
6.   The liquidity aggregate is positively related to commercial and industrial loans.
7.   Liquidity risk is procyclical.
8.   Risk-deltas and liquidity are negatively related to the Commercial Paper-Treasury Bill spread.

To model systemic risk, ideally we would like data that include periods of extreme financial crises with large real economic fallout. These extreme events are rare. However, there are numerous medium-size crises. These crises all reflected significant shocks to financial intermediaries and involved negligible real effects.

It is worth highlighting the commonality with and differences to extreme event analysis on general. Extreme value theory and other methods covering rare events rely critically on certain statistical assumptions. The probability distribution of outcomes deep in the tails is typically assumed. In comparison, macroeconomic modeling involves assumptions about structural parameters that govern behavior both in medium events and in tail events. such modeling is less subject to the Lucas critique. In addition, models of financial market frictions often describe behavior in terms of constraints, rather than beliefs or preferences. It constraints are tighter in extreme events, then it seems plausible that the models may better approximate behavior during such events so that a modeling exercise may perform better than a statistical exercise for extreme events or behaviors is extreme events are avoidable due to limited data.