That’s a very wide-ranging question. The short answer is ‘yes’, but scalability isn’t really a tick-box feature; it’s a result of the overall architecture. Some features of OpenGamma that will help it scale:
View Processors are multi-threaded and will parallelise dependency graph building over multiple cores
Multiple View Processors can be used to separate workloads, e.g. live vs. batch
View Processors dispatch computation jobs across a grid of compute nodes to execute the actual analytic calculations
Computation jobs are grouped in such a way as to minimize inter-node communication, and local on-node computation caches are used wherever possible
A shared value cache allows compute nodes to share intermediate results that are not available in their local value caches
Results can be pulled directly from the shared value cache, taking strain off the compute nodes
The shared value cache can be weakly consistent without affecting results, enabling better scalability (using e.g. memcached)
In batch mode, calculation nodes can write directly to the batch database without going via a gather phase
Because all data access goes via the *Source and *Master interfaces, in-memory caches of these can deliver very fast performance and be updated automatically when the underlying data store changes
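To make the dependency-graph parallelism above concrete, here is a minimal sketch (my own illustrative code, not OpenGamma's actual classes): nodes whose inputs are all available are dispatched in parallel, the way a View Processor farms independent calculations out across cores or compute nodes.

```python
# Hypothetical sketch of parallel dependency-graph execution; all names here
# are illustrative, not OpenGamma's real API.
from concurrent.futures import ThreadPoolExecutor


def execute_graph(deps, funcs, workers=4):
    """deps maps node -> list of prerequisite nodes; funcs maps node -> a
    callable taking the prerequisites' results, in order. Each pass runs
    every node whose inputs are ready, in parallel."""
    results = {}
    remaining = dict(deps)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        while remaining:
            # Nodes whose prerequisites have all been computed can run now.
            ready = [n for n, ds in remaining.items()
                     if all(d in results for d in ds)]
            futures = {n: pool.submit(funcs[n],
                                      *[results[d] for d in remaining[n]])
                       for n in ready}
            for n, fut in futures.items():
                results[n] = fut.result()
                del remaining[n]
    return results
```

For example, two market-data fetches can run concurrently while a pricing step that depends on both waits for them; a real scheduler would also handle failures and cycles, which this sketch omits.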
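The local-plus-shared value caching described above can be sketched as a two-level lookup (again, hypothetical names, not OpenGamma's real cache API): check the on-node cache first, then the shared cache, and only recompute on a full miss. A stale or missing shared entry only costs a recomputation, never a wrong result, which is why weak consistency is acceptable.

```python
# Illustrative two-level value cache: fast local dict in front of a shared,
# possibly weakly consistent store (stand-in for e.g. memcached).
class DictSharedCache:
    """In-process stand-in for a distributed cache such as memcached."""
    def __init__(self):
        self._data = {}

    def get(self, key):
        return self._data.get(key)

    def set(self, key, value):
        self._data[key] = value


class TwoLevelValueCache:
    def __init__(self, shared):
        self.local = {}        # on-node computation cache
        self.shared = shared   # shared value cache across compute nodes

    def get_or_compute(self, key, compute):
        if key in self.local:
            return self.local[key]
        value = self.shared.get(key)
        if value is None:
            # Miss at both levels: recompute and publish for other nodes.
            value = compute()
            self.shared.set(key, value)
        self.local[key] = value
        return value
```

A results consumer can read straight from the shared cache the same way, without touching the compute nodes at all.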
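The last point is essentially the decorator pattern: because callers only ever see a *Source interface, a caching wrapper can sit in front of the real store transparently. A rough sketch, with hypothetical names (OpenGamma's actual change-notification mechanism may differ):

```python
# Illustrative caching decorator around a *Source-style interface.
class CachingSource:
    def __init__(self, underlying):
        self._underlying = underlying   # the real source, e.g. database-backed
        self._cache = {}

    def get(self, uid):
        if uid not in self._cache:
            self._cache[uid] = self._underlying.get(uid)
        return self._cache[uid]

    def on_master_changed(self, uid):
        # Invoked when the underlying data store is updated, so the cache
        # stays consistent without a full reload.
        self._cache.pop(uid, None)
```

Repeated lookups of the same object then hit memory instead of the database, and an update notification from the master evicts just the affected entry.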