Corporate actions

Hi again,

Wanted to see how the OG platform handles corporate actions. For example, if an existing security gets merged, taken over, or has a ticker change, how does the system handle it? Which files should I look at for those features?

Also, for dividends, stock splits and return calculations, is there any concept of return adjustment factors?

Sorry again for shooting questions, and thanks again for the wonderful product.

Thanks

We currently don’t support the storage of corporate actions as part of equity data structures. What we do support is complex versioning on equity (and, in fact, all data), which means that as long as we regularly reload data, we’ll be able to pick up things like ticker changes, mergers, etc, and get the appropriate historical data when doing restatements. What this misses is the linkage between companies for mergers, etc. For this, we are planning on adding an ‘Organization Master’ system to store more complex information about organizations and the relationships between them. This would be used in both equity and credit scenarios.

One of the reasons we didn’t do corporate actions from the beginning is that internally our primary source of reference data is currently Bloomberg. We currently rely on Bloomberg to provide split and dividend adjustments of their historical time series so that return calculations make sense. The reason we went this route was that in a project I was involved in some years ago, we reconstructed clean return series by applying dividend and split adjustments ourselves using Bloomberg corporate actions data - what we found was that the data was of extremely variable quality and it was actually more accurate to use the data Bloomberg had created internally for most purposes.

Because we recognise that for some users, this is too simplistic, we do expect to add support for corp actions data structures themselves at some point but the timetable for that will be largely driven by paying customers demanding it. The other issue is access to a more appropriate data source such as CRSP, Thomson Reuters DataScope Select, etc.

Thanks Jim for the reply.

which means that as long as we regularly reload data, we’ll be able to pick up things like ticker changes, mergers, etc, and get the appropriate historical data when doing restatements.

Just to be clear on what this means from an OG platform perspective: all the data management is done on the Bloomberg side and not on the OG side. So whenever we request data with an old ticker, if the mapping of the old ticker to the new ticker is done on the Bloomberg side then the data will flow over to OG; otherwise it will be NA.

So if this is correct, then the internally stored data (PostgreSQL or HSQLDB) will be used less, since most of the time data comes from the Bloomberg servers. Might this slow down the system if all requests are served from their servers in real time?

Does that make sense? How will the request know that such-and-such data should come from Bloomberg and other data from our server? Since Bloomberg terminals have limits on the number of “hits” on their servers per day/month, and each refresh of the same data counts as an additional hit, this would limit the number of securities you can analyze at one time.

Thanks again Jim for your valuable thoughts.

Thanks

Not really. We don’t recommend accessing Bloomberg (or any other reference data provider) directly at all, for exactly the reason you give: there are usually strict usage limits. The model we recommend is to update our databases from Bloomberg on a schedule that suits your particular data. This would probably be a nightly script that does something like:

  1. Update all time series daily with the latest data point.
  2. Reload time series on a rotating basis (e.g. reload x% of your time series each day so after 100/x days all your series will have been refreshed)
  3. If you're using adjusted series from Bloomberg, reload the series if the corporate action schedule changes (although you have to be careful about timing and pre-announced actions that occur in the future).
We'll be releasing tools to do at least item 1, and hopefully item 2, with the Bloomberg module release as part of 1.0. Item 3 is more equity-dependent and we haven't done that work yet, but writing it wouldn't actually be very hard.
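As a rough illustration of the rotation in item 2 (plain Java, not using any OpenGamma API; the method and parameter names are made up for the example), one way to pick which series to reload on a given night is to bucket them by a stable hash:

/**
 * Illustrative only: choose which time series to fully reload tonight so that
 * every series is refreshed once per rotation (e.g. once every 20 nightly runs).
 */
static List<String> seriesToReloadTonight(List<String> allSeriesIds, int rotationLength, long runNumber) {
  int tonightsBucket = (int) (runNumber % rotationLength);
  List<String> selected = new ArrayList<String>();
  for (String id : allSeriesIds) {
    // bucket each series by a stable hash so the same series always falls on the same night
    int bucket = Math.abs(id.hashCode() % rotationLength);
    if (bucket == tonightsBucket) {
      selected.add(id);
    }
  }
  return selected;
}

The nightly script would then pass the selected identifiers to whatever reload tooling you use, while the remaining series just get their latest data point appended as in item 1.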

Thanks Jim for detailed explanation. Thanks

Hello @jim,

I am trying to understand the versioning concept, at least at the security level.
I haven’t understood at what level the versioning is done: at the Security level or the SecurityDocument level.
Basically, let’s say one company’s companyName changed. What I would expect is that only the diff of the change is stored.
But what I could make out was that I need to clone a Security, make the change in it, clone its SecurityDocument, apply the new version and then give it to the security master to index.

Please correct me if i am wrong.

Thanks in advance.

~VM

SecurityDocuments are just simple containers for the actual security you’ve retrieved. In this case we’re talking about metadata related to the query - for Securities it’s actually not very interesting at all. It’s the actual security itself that’s versioned. Each version has a different UniqueId, but shares the same ObjectId. We do not store the diff, as the objects are relatively small and don’t typically change that often. Doing diffs would make it slow to reconstruct the state (unless we did reverse diffs, in which case it’d be slow to realise an old version).

For time series, we do store diffs (transparently, you never see them).

To be clear about your use case, you modify the Security object, put it in a new SecurityDocument object and call update or correct on the security master depending on whether you want to change the current version (update) or a past version (correct).
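As a rough sketch of that flow, assuming an EquitySecurity and the security master from the demo branch (secMaster and securityUniqueId are placeholder variables, and the exact method names should be checked against your checkout):

// fetch the latest version, change the name, and write it back as a new version
SecurityDocument doc = secMaster.get(securityUniqueId);
EquitySecurity security = (EquitySecurity) doc.getSecurity();
security.setCompanyName("New Name Ltd");

SecurityDocument newDoc = new SecurityDocument(security);
newDoc.setUniqueId(doc.getUniqueId());
secMaster.update(newDoc);    // new version: same ObjectId, new UniqueId
// secMaster.correct(newDoc) would instead amend the version identified by the UniqueId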

Hello @jim,

I noticed a strange thing: we can’t set our own custom version (from and to) instants or correction (from and to) dates.
On checking the code, I found that on add and update it overwrites the values we gave it.

protected D doUpdateInTransaction(final D document) {
  // load old row
  final D oldDoc = getCheckLatestVersion(document.getUniqueId());
  // update old row
  final Instant now = now();
  oldDoc.setVersionToInstant(now);
  updateVersionToInstant(oldDoc);
  // insert new row
  document.setVersionFromInstant(now);
  document.setVersionToInstant(null);
  document.setCorrectionFromInstant(now);
  document.setCorrectionToInstant(null);
  document.setUniqueId(oldDoc.getUniqueId().toLatest());
  mergeNonUpdatedFields(document, oldDoc);
  insert(document);
  return document;
}

So does this mean inserting historical information about a security is not supported?
One more question in the same context: what is the difference between the version and correction dates? I can see both are treated almost the same in the code.

I am using Branch - demo/20120201

Thanks
Vineeth

I’ve passed on this question to one of my colleagues who will follow up.

The version-correction infrastructure is currently designed to be similar to a version control system. The master has full control of the version/correction instants, and they are always set to the current instant.

The purpose of this is to ensure that any given report/view can be run as though the current date was in the past. Achieving this requires that history in the database is never changed (like a version control system). To achieve this, we currently only allow data to be inserted as of the current instant.

Versions are the primary unit of history. As new data appears (such as a new version of a security), you will write a new version of the document. Corrections are the secondary unit of history. If you find out later that data you had previously stored was incorrect, you will write a correction to that version (the correction is a new record and does not delete the original). Normally you will view the data with all the corrections applied, but it is possible to view the data only corrected up to a specific date in the past, allowing you to exactly replicate the state of the world the application saw then.
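To illustrate, replaying an old state is just a matter of querying with a version/correction pair; a sketch, assuming the VersionCorrection class and a get(objectId, versionCorrection) lookup on the master (secMaster, securityObjectId, versionAsOf and correctedTo are placeholders):

// read the security version that was effective at versionAsOf, applying only
// the corrections that had been made by correctedTo
VersionCorrection vc = VersionCorrection.of(versionAsOf, correctedTo);
SecurityDocument doc = secMaster.get(securityObjectId, vc);

Using the current instants for both gives the fully corrected, most recent view.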

In your case, you want to load versions as though you had created them in the past. This can be achieved, but indirectly. Each master has a method setTimeSource(TimeSource) that allows you to change the current instant of the master. In your loader you would do something like the following pseudo-code:

foreach (document in listYouWantToStore) {
  master.setTimeSource(TimeSource.fixed(versionInstant))
  master.add(document)
}

This has the effect of temporarily changing the current time of the master while you write each document, allowing you to add records in the past. Make sure you only do this on a master that is running locally rather than one on a shared server!

Hello @stephen,

Interesting design.
Can I ask about the design goal behind making the TimeSource parameter configurable by the programmer?
Won’t this have side effects? For example, if someone else is retrieving security information at the current instant while I am changing the TimeSource as described above, won’t the other person get some old record?
Also, once I am done, I need to revert the TimeSource to the old one, right?

Thanks
Vineeth

The TimeSource parameter is not really intended for this purpose - it should only be set once at construction and never changed. However, right now it is the only way to achieve your goals. As I noted and you picked up on, changing a master’s TimeSource would have negative consequences for other users, so the master must only be used by the loading tool, i.e. create a new DbSecurityMaster or similar solely for this purpose. Longer term, we will need to design a better solution to this use case.

Thanks for the explanation, @stephen.
There was something i left out to ask.
Is it guaranteed that for a given time frame only one version exists (except when a correction happens)?
That is, if I manually “update” with another version whose time frame overlaps an existing one, will OG throw an exception or something?

But before asking all those questions, I need to confirm whether this concept of versioning was brought in for this purpose or whether it is a wider use-case feature :)

Thanks
Vineeth

Versioning is a core OpenGamma feature, designed to ensure that a report can always be recreated.

You can only add new versions after all existing versions (without writing manual SQL), so the only way to change a historic version is by correcting it. Thus, yes, for any given instant there is only one version of the document (ignoring corrections). The database masters validate these rules.

Just for the record, I could achieve the same without the TimeSource feature.

SecurityDocumentTime code -> https://gist.github.com/2041341

  // first version of the security, effective from 'start'
  SecurityDocumentTime doc = new SecurityDocumentTime(security);
  Instant start = Instant.ofEpochMillis(1000000);
  Instant end = Instant.ofEpochMillis(2000000);
  doc.setRealVersionFromInstant(start);
  secMaster.add(doc);
  // second version with the changed company name, effective from 'end'
  security.setCompanyName(security.getCompanyName() + "_new");
  start = end;
  doc = new SecurityDocumentTime(security);
  doc.setRealVersionFromInstant(start);
  secMaster.update(doc);

And as I use this only for loading, I didn’t find any side effects.

That approach probably works today, although we wouldn’t guarantee it keeps on working. It does mean that the correction instant will be now, which may or may not be fine depending on your use case.

@stephen - though the approach you mentioned is a workable solution, I feel there should be a provision to handle such scenarios without side effects. The user should be able to specify the start date of the new version.

If you also feel the same, kindly open an issue.

Thanks
Vineeth

http://jira.opengamma.com/browse/PLAT-2040