The objective of the project is to:

  1. Build an application implementing the case study presented by Bloomberg, which describes the methodology used to generate a Monte-Carlo Value-at-Risk (V@R) report.

  2. Port the solution to Microsoft Compute Cluster Server to evaluate and study the results.

The objective of the V@R application is to simulate prices for given stocks at time t+1, given stock prices up to time t. This distribution is represented as a set of 'x' possible values called simulated prices. The input to this calculation is the historical prices of the stocks. From the trailing year's worth of prices, the application calculates the variance and covariance of returns. Based on the variance and correlation of returns, it then calculates 'x' equally probable sets of prices for this group of stocks one day forward. Each of these sets of prices is one scenario in our Monte-Carlo simulation.
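To make the end of this pipeline concrete: once each of the 'x' equally probable scenarios has been evaluated into a profit-and-loss figure, the V@R at a chosen confidence level is read off as a percentile of that distribution. A minimal Java sketch of that final read-off (the class and method names are our own illustration, not the application's code):

```java
import java.util.Arrays;

public class VarFromScenarios {
    // 1-day V@R at the given confidence level, from equally probable simulated P&L scenarios.
    // E.g. at 95% confidence, V@R is the loss at the 5th percentile of the P&L distribution.
    public static double valueAtRisk(double[] pnl, double confidence) {
        double[] sorted = pnl.clone();
        Arrays.sort(sorted);                                        // ascending: worst losses first
        int idx = (int) Math.round((1.0 - confidence) * sorted.length);
        if (idx >= sorted.length) idx = sorted.length - 1;
        return -sorted[idx];                                        // report the loss as a positive number
    }

    public static void main(String[] args) {
        double[] pnl = {-12.0, -5.0, 1.5, 3.0, 4.0, -2.0, 0.5, 2.0, -8.0, 6.0};
        System.out.println("90% V@R = " + valueAtRisk(pnl, 0.90));  // 8.0
    }
}
```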

The returns are calculated geometrically. If we represent the price today as p_t, the price yesterday as p_{t-1}, and the return between yesterday and today as r_t, we can use

  p_t = p_{t-1} * e^{r_t}

to get today's price from yesterday's price and today's return, and thus

  r_t = ln(p_t / p_{t-1})
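These two relations are straightforward to compute; a minimal Java sketch (class and method names are illustrative):

```java
public class GeometricReturns {
    // r_t = ln(p_t / p_{t-1}): the continuously compounded (log) return
    public static double logReturn(double priceToday, double priceYesterday) {
        return Math.log(priceToday / priceYesterday);
    }

    // p_t = p_{t-1} * e^{r_t}: recover today's price from yesterday's price and the return
    public static double nextPrice(double priceYesterday, double logReturn) {
        return priceYesterday * Math.exp(logReturn);
    }

    public static void main(String[] args) {
        double r = logReturn(105.0, 100.0);
        System.out.println("log return = " + r);                    // ~0.0488
        System.out.println("recovered  = " + nextPrice(100.0, r));  // ~105.0
    }
}
```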

In the financial context, volatility is synonymous with standard deviation; in this case it refers to the standard deviation of returns. The variance-covariance matrix represents the variances and covariances of the stock returns. Over short periods of time, the volatility of returns dominates the mean of returns. We generate 'x' sets of Gaussian (standard normal) random numbers, each with a mean of 0 and a standard deviation of 1.

We obtain the Cholesky decomposition matrix from the variance-covariance matrix and multiply it with the sets of random numbers to get correlated simulated returns. The simulated prices for tomorrow are then calculated from today's prices and the simulated returns (p_{t+1} = p_t * e^{r_{t+1}}).
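As a sketch of this step, assuming a textbook Cholesky factorization (the application's own matrix code is not shown in this document), a correlated simulated-return vector is the product of the Cholesky factor and a vector of independent standard normals:

```java
import java.util.Random;

public class CholeskySim {
    // Lower-triangular Cholesky factor L of a symmetric positive-definite matrix A (A = L * L^T)
    public static double[][] cholesky(double[][] a) {
        int n = a.length;
        double[][] l = new double[n][n];
        for (int i = 0; i < n; i++) {
            for (int j = 0; j <= i; j++) {
                double sum = a[i][j];
                for (int k = 0; k < j; k++) sum -= l[i][k] * l[j][k];
                l[i][j] = (i == j) ? Math.sqrt(sum) : sum / l[j][j];
            }
        }
        return l;
    }

    // One simulated return vector: L * z, where z holds independent N(0, 1) draws
    public static double[] simulatedReturns(double[][] l, Random rng) {
        int n = l.length;
        double[] z = new double[n];
        for (int i = 0; i < n; i++) z[i] = rng.nextGaussian();
        double[] r = new double[n];
        for (int i = 0; i < n; i++)
            for (int k = 0; k <= i; k++) r[i] += l[i][k] * z[k];
        return r;
    }

    public static void main(String[] args) {
        double[][] cov = {{0.04, 0.01}, {0.01, 0.09}};  // toy 2-stock variance-covariance matrix
        double[] r = simulatedReturns(cholesky(cov), new Random(42));
        System.out.println("simulated returns: " + r[0] + ", " + r[1]);
    }
}
```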

We use the Black-Scholes-Merton equation to price call options on a stock. The current value of each stock option is calculated, and then the value is recalculated as if the option were aged by one day to tomorrow. The profit-and-loss calculations and the V@R calculations are then performed to get the final result.
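The Black-Scholes-Merton call price itself is standard; a self-contained Java sketch using the classic Abramowitz-Stegun polynomial approximation for the normal CDF (the JDK has no built-in one), with illustrative names:

```java
public class BlackScholes {
    // Standard normal CDF via the Abramowitz-Stegun 26.2.17 approximation (abs. error < 7.5e-8)
    static double normCdf(double x) {
        double t = 1.0 / (1.0 + 0.2316419 * Math.abs(x));
        double d = 0.3989422804014327 * Math.exp(-x * x / 2.0);  // standard normal density
        double p = d * t * (0.319381530 + t * (-0.356563782
                 + t * (1.781477937 + t * (-1.821255978 + t * 1.330274429))));
        return x >= 0 ? 1.0 - p : p;
    }

    // European call: s = spot, k = strike, r = risk-free rate, sigma = volatility, tau = years to expiry
    public static double call(double s, double k, double r, double sigma, double tau) {
        double d1 = (Math.log(s / k) + (r + sigma * sigma / 2.0) * tau) / (sigma * Math.sqrt(tau));
        double d2 = d1 - sigma * Math.sqrt(tau);
        return s * normCdf(d1) - k * Math.exp(-r * tau) * normCdf(d2);
    }

    public static void main(String[] args) {
        // Value today vs. "aged" one trading day (tau shrinks by 1/252); the difference feeds the P&L
        double today = call(100, 100, 0.05, 0.2, 0.5);            // ~6.89
        double aged  = call(100, 100, 0.05, 0.2, 0.5 - 1.0 / 252);
        System.out.println("today: " + today + ", aged: " + aged);
    }
}
```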

Comet is a decentralized (peer-to-peer) computational infrastructure that extends Desktop Grid environments to support applications with high computational requirements. It provides a decentralized and scalable tuple space, efficient communication and coordination support, and application-level abstractions that can be used to implement Desktop Grid applications based on parallel asynchronous iterative algorithms using the master-worker/BOT paradigm. The tuple space is essentially a global virtual shared space constructed from the semantic information space used by entities for coordination and communication. This information space is deterministically mapped, using a locality-preserving mapping, onto the dynamic set of peer nodes in the Grid system. The resulting structure is a locality-preserving semantic distributed hash table (DHT) built on top of a self-organizing structured overlay.

Figure: A schematic overview of the CometG system architecture.

The communication layer provides an associative communication service and guarantees that content-based messages, specified using flexible content descriptors, are served with bounded cost. This layer also provides a direct communication channel to efficiently support large-volume data transfers between peer nodes. The communication channel is implemented using a thread-pool mechanism and TCP/IP sockets. The coordination layer provides the Linda-like shared-space coordination interfaces:

  (i) Out(ts, t): a non-blocking operation that inserts tuple t into space ts.

  (ii) In(ts, t, timeout): a blocking operation that removes a tuple matching template t from the space ts and returns it. If no matching tuple is found, the calling process blocks until a matching tuple is inserted or the specified timeout expires; in the latter case, null is returned.

  (iii) Rd(ts, t, timeout): a blocking operation that returns a tuple matching template t from the space ts. If no matching tuple is found, the calling process blocks until a matching tuple is inserted or the specified timeout expires; in the latter case, null is returned. This operation behaves exactly like In except that the tuple is not removed from the space.
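To illustrate the blocking semantics of these operations, here is a toy single-process stand-in (CometG's real space is distributed over the DHT and matches tuples by template; this sketch only mimics the blocking-with-timeout behavior, and the names are ours):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// A toy in-memory stand-in for the Linda-like Out/In operations described above.
public class ToyTupleSpace {
    private final LinkedBlockingQueue<String> space = new LinkedBlockingQueue<>();

    // Out: non-blocking insert of a tuple into the space
    public void out(String tuple) { space.offer(tuple); }

    // In: blocking remove with a timeout; returns null if the timeout expires
    public String in(long timeoutMs) {
        try {
            return space.poll(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return null;
        }
    }

    public static void main(String[] args) {
        ToyTupleSpace ts = new ToyTupleSpace();
        ts.out("<VarAppTask><TaskId>1</TaskId></VarAppTask>");  // master side
        System.out.println("worker got: " + ts.in(1000));       // worker side
        System.out.println("empty poll: " + ts.in(10));         // null after timeout
    }
}
```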

Process/Procedure

V@R Application

The application that computes the Value-at-Risk reports is built in Java and is designed to parallelize the computations for different portfolios by using the CometG framework on Microsoft Compute Cluster Server. Since the methodology used to generate the Value-at-Risk report is Monte-Carlo, the input data can be divided into independent parallel tasks and processed over the compute nodes to obtain faster results. Two sets of tasks are generated in the application: the first set computes the simulated returns used to calculate the simulated prices, and the second set performs the profit-and-loss and V@R calculations. Each task sends its data in the form of serialized bytes. The workers deserialize the data and perform the required computation: for the first set, the calculation of simulated returns via matrix multiplication of the random numbers generated for a given task size with the Cholesky decomposition matrix; for the second set, the application of the Black-Scholes equation to calculate the profit and loss of the different stocks for the calculated set of simulated prices. A typical out task is shown below. The XML is converted into a tuple object, and the data to be sent to the worker is stored in the data attribute of the tuple.

<VarAppTask>
  <TaskId>count</TaskId>
  <DataBlock>tasks</DataBlock>
  <MasterId>Integer.toString(MasterId)</MasterId>
  <MasterNetName>master.MasterNetName</MasterNetName>
</VarAppTask>

The worker uses the In operation to query the space for a task and reads the tuple to perform the required calculations.
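The serialized-bytes exchange described above can be sketched with standard Java object serialization; the helper names below are illustrative, and the application's actual wire format is not shown in this document:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.UncheckedIOException;
import java.util.Arrays;

public class TaskBytes {
    // Master side: serialize a block of task data (e.g. rows of random numbers) for the tuple's data attribute
    public static byte[] toBytes(double[][] block) {
        try (ByteArrayOutputStream bos = new ByteArrayOutputStream();
             ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(block);
            oos.flush();
            return bos.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Worker side: recover the block before computing on it
    public static double[][] fromBytes(byte[] bytes) {
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return (double[][]) ois.readObject();
        } catch (IOException | ClassNotFoundException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        double[][] block = {{0.1, -0.2}, {0.05, 0.3}};
        double[][] back = fromBytes(toBytes(block));
        System.out.println("round-trip ok: " + Arrays.deepEquals(block, back));  // true
    }
}
```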

The flowchart of the Application code design is shown below:

[Figure: Flowchart of the application code design]

Microsoft Compute Cluster

We have a cluster of 9 machines (1 head node and 8 compute nodes) on which the Microsoft Compute Cluster Pack is deployed.

We ran different tests to evaluate the performance of the application on the MS-CCS 2008.

The tests were based on the following criteria:

  1. Different numbers of machines (min 1, max 8)

  2. Different numbers of simulations (min 1000, max 6000)

  3. Different task sizes, i.e., the number of rows of the total simulations used for calculation at a time (min 50, max 500)

Results & Conclusions

The application was run for different numbers of simulations (1000, 5000 and 6000) and on multiple machines to study the total application times and the times for the In and Out operations.

The following results were obtained:

Graphs for total Application Times

From the above graphs, it is evident that as the task size increases, the total application time decreases. This is because the input data is sent in bigger packets, so the communication time is reduced. It can also be observed that the application times for multiple machines are lower than those for a single machine at 5000 and 6000 simulations: increasing the number of simulations increases the actual number of tasks, and multiple machines become optimal for a higher number of tasks. This is evident from the graph comparing the application times for different simulation counts at the common task size of 200. We also see from the above graph that the application time is lower for a higher number of simulations when the number of machines is between 3 and 6. This means that even for a higher number of simulations, the total application time is lower with a larger number of machines.

Graphs for Time for “OUT”

 

The time for Out increases as the number of simulations and the task size increase, because the amount of work done increases.

 

Graphs for Time for “IN”

 

The time for In increases as the number of machines increases, because the communication overhead of querying tuples from the tuple space grows. On a single machine, since all the In operations execute serially, the times show a high value.

Future Work

The results presented above are from initial runs of the application on MS-CCS. They show that the Compute Cluster Pack solution definitely aids in improving the deployment and performance of a distributed application.

In the future we plan to

  1. Conduct larger runs on a greater number of machines.

  2. Exploit the multiprocessor capacity of the cluster machines for faster calculations.

  3. Explore and exploit the Direct Connect Network communication provided with CCS.

  4. Use Hyper-V for migration of masters and workers and for sandboxing the Comet framework.

References

http://technet.microsoft.com/en-us/library/cc720163.aspx

Case study document provided by Bloomberg
