Finally I came to the conclusion that I need to create my own component and add it to the ini file. This way, while initializing, I can get access to the already registered components.
Thanks
Vineeth
Hi Vineeth,
Yes, that’s certainly one of the options. My colleague Chris is enumerating the possibilities at the moment and will post a reply shortly.
Jim
Hi @vineeth
If you want a view processor running in a different process have a look at DemoViewProcessor in the OG-Integration project. That creates a remote view processor which connects to a view processor in a running server.
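Roughly, once you have the remote ViewProcessor that DemoViewProcessor builds, using it looks like this (a minimal sketch; the class and method names are as I recall them from the engine API, so treat them as assumptions and check DemoViewProcessor for the real wiring):

```java
import com.opengamma.engine.marketdata.spec.MarketData;
import com.opengamma.engine.view.ViewProcessor;
import com.opengamma.engine.view.client.ViewClient;
import com.opengamma.engine.view.execution.ExecutionOptions;
import com.opengamma.id.UniqueId;
import com.opengamma.livedata.UserPrincipal;

public class RemoteViewExample {

  // 'viewProcessor' is assumed to be the remote instance that
  // DemoViewProcessor constructs against a running server.
  public static ViewClient startView(ViewProcessor viewProcessor, UniqueId viewDefinitionId) {
    ViewClient client = viewProcessor.createViewClient(UserPrincipal.getLocalUser());
    // Attach to the view definition and cycle indefinitely against live market data.
    client.attachToViewProcess(viewDefinitionId, ExecutionOptions.infinite(MarketData.live()));
    return client;
  }
}
```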
Chris
I have a better (but not yet complete) understanding now of how things work with the OG Engine view processors and dependency graphs.
Just a note to keep you informed of one direction I am currently looking at in case you have any advice or have looked at it already.
Currently I am looking at partitioning and data affinity, to see if there is a way to optimise calculations so that different scenarios push their processing to a remote node where the calculations and base data are most likely to have already been done and stored in a local cache.
For example, for a set of market risk scenarios there could be a remote USD calculation node, a remote GBP node, a remote JPY node, etc., and processing could try to push all calculations on USD positions to the USD node, while still making use of a distributed cache as a 'catch-all' for calculations that need data from more than one node. (Say a calculation for a GBP/USD cross-currency swap needs data associated with both the USD and the GBP nodes; that calculation could run on either node.) Some replication of data across separate nodes could be useful as well.
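To make that concrete, here is a toy routing rule of the kind I mean (nothing to do with the actual OG scheduling API; all names are made up):

```java
import java.util.Map;
import java.util.Objects;
import java.util.Set;

// Toy illustration only: pin single-currency work to that currency's node;
// for cross-currency work any involved node will do, with the distributed
// cache acting as the catch-all for data held on the other node.
public class CurrencyAffinityRouter {

  private final Map<String, String> nodeByCurrency; // e.g. "USD" -> "calc-node-usd"

  public CurrencyAffinityRouter(Map<String, String> nodeByCurrency) {
    this.nodeByCurrency = nodeByCurrency;
  }

  public String chooseNode(Set<String> currencies) {
    if (currencies.size() == 1) {
      // Single-currency calculation: pin it to that currency's node.
      String node = nodeByCurrency.get(currencies.iterator().next());
      return node != null ? node : "shared-node";
    }
    // e.g. a GBP/USD cross-currency swap: either the GBP or the USD node works.
    return currencies.stream()
        .map(nodeByCurrency::get)
        .filter(Objects::nonNull)
        .findFirst()
        .orElse("shared-node");
  }
}
```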
Any pointers or ideas?
Thanks,
Neil
Perhaps splitting a portfolio into sub-groups by currency (or some other attribute) might be one way to simplify what I have described above? Then the targets would already specify a partition (a portfolio sub-group), roughly as in the sketch below.
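Something like this toy grouping step (the Position type here is made up, not OG's own position API):

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Toy sketch: bucket positions by currency so each bucket can become its
// own sub-portfolio, i.e. a ready-made partition target.
public class PortfolioPartitioner {

  // Stand-in for a real position type; OG's own Position API would be used instead.
  public static final class Position {
    final String id;
    final String currency;

    Position(String id, String currency) {
      this.id = id;
      this.currency = currency;
    }
  }

  public static Map<String, List<Position>> byCurrency(List<Position> positions) {
    return positions.stream()
        .collect(Collectors.groupingBy(p -> p.currency));
  }
}
```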
There’s a fair bit of that kind of optimisation that the system will do for you with no intervention - it will try to keep common sub-calculations on the same node for example, which will tend to group calculations using the same curves onto the same nodes anyway. I’d try that and see how it performs before going down a fixed scheduling route, which, while perfectly feasible technically, is a lot less flexible.
Thanks for your answers, Jim. Much appreciated. I will read a bit further through the code.
Is the optimisation done only per single dependency graph? Would a second dependency graph have knowledge of which remote nodes may have previously dealt with similar data (such as USD yield curves)?
If it’s within the same view cycle, then yes. Each view cycle is calculated using a specific portfolio version, valuation time, and market data snapshot; if any of those vary, you wouldn’t be able to share the dependency graph anyway. As the view executes cycles forward through time, any market data that has changed between cycles triggers recalculation of the appropriate dependencies, according to parameters you set in the view (you can, for example, force full recalculations periodically). So if you have, say, a yield curve whose underlying data comes from a historical source or a snapshot and isn’t changing, it won’t be recalculated.
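The knobs I’m referring to live on the view definition; something like this (method names from memory, so double-check against the ViewDefinition Javadoc; periods are in milliseconds):

```java
import com.opengamma.engine.view.ViewDefinition;

public class RecalcSettingsExample {

  // Setter names as I recall them from ViewDefinition; treat as assumptions.
  public static void configure(ViewDefinition viewDef) {
    // Don't deliver delta (incremental) cycles more often than once a second.
    viewDef.setMinDeltaCalculationPeriod(1000L);
    // Force a full recalculation at least every five minutes, even if no
    // market data has changed in the meantime.
    viewDef.setMaxFullCalculationPeriod(5 * 60 * 1000L);
  }
}
```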