Splitting Files
OpenPathSampling saves all the information about the simulation, including the coordinates and velocities of every snapshot. This makes it possible to perform many different analyses later, even analyses that hadn’t been expected before the sampling.
However, this also means that the files can be very large, and frequently we don’t need all the coordinate and velocity data. This example will show how to split the file into two: a large file with the coordinates and velocities, and a smaller file with only the information needed to run the main analysis. This allows you to copy the smaller file to a local drive and perform the analysis interactively.
This particular example extends the toy MSTIS example. It shows how to split the file, and then shows that the analysis still works.
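At a high level, the workflow looks like this (a minimal sketch using the file names from the notebooks below; the variable names are illustrative and the details follow step by step):
import openpathsampling as paths
storage = paths.AnalysisStorage("mstis.nc")    # full simulation file: everything
st_traj = paths.Storage("mstis_traj.nc", "w")  # large file: coordinates and velocities
st_data = paths.Storage("mstis_data.nc", "w")  # small file: only what the analysis needs
st_data.fallback = storage                     # anything missing is looked up in the full file
# ... save CVs, (shallow) trajectories, and steps as shown below ...
st_data.close(); st_traj.close(); storage.close()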
Splitting a simulation
Included in this notebook:
- Split a full simulation file into trajectories and the rest
%matplotlib inline
import matplotlib.pyplot as plt
import openpathsampling as paths
import numpy as np
The optimal way to use storage depends on whether you're doing production or analysis. For analysis, you should open the file as an AnalysisStorage object, which makes the analysis much faster.
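If you only need to write to a file or take a quick look, a plain Storage is enough; AnalysisStorage additionally caches aggressively, which pays off for repeated reads. A minimal sketch of the two ways to open the same file (the variable names are just for illustration):
storage_plain = paths.Storage("mstis.nc", "r")    # read-only, no extra caching
storage_fast = paths.AnalysisStorage("mstis.nc")  # preloads and caches for fast repeated access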
%%time
storage = paths.AnalysisStorage("mstis.nc")
st_split = paths.Storage('mstis_strip.nc', 'w')
st_traj = paths.Storage('mstis_traj.nc', 'w')
st_data = paths.Storage('mstis_data.nc', 'w')
st_split.fallback = storage
st_data.fallback = storage
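The fallback tells a storage where to look for objects that it does not contain itself. The analysis notebook below uses the same mechanism to pull coordinates from the trajectory file; a hedged sketch of that read-side setup (variable name illustrative):
small = paths.Storage('mstis_data.nc', 'r')           # small file: no coordinates
small.fallback = paths.Storage('mstis_traj.nc', 'r')  # missing snapshot data is loaded from here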
Add a single snapshot as a reference and create the appropriate stores in each of the new files
st_data.snapshots.save(storage.snapshots[0])
st_traj.snapshots.save(storage.snapshots[0])
st_split.snapshots.save(storage.snapshots[0])
We will store the trajectories completely in the trajectory file, and only shallow trajectories (empty snapshots) in the data and split files.
Fix the CVs first; the rest is fine.
cvs = storage.cvs
q = storage.snapshots.all()
Fill the weak cache from the stored cache. This should be fast, and we can later use the weak cache (as long as q exists) to fill the cache of the data file.
%%time
_ = [cv(q) for cv in cvs]
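As a quick sanity check, evaluating the CVs a second time should now hit the in-memory cache and return almost immediately:
%%time
# second pass: values should come from the weak cache, so this should be near-instant
_ = [cv(q) for cv in cvs]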
Now that we have cached the CV values, we can save the CVs in the new store. This will also set the disk cache to the new file; since the file is new, that cache starts out empty.
%%time
# this will also switch the storage cache to the new file
_ = map(st_split.cvs.save, storage.cvs)
%%time
# this will also switch the storage cache to the new file
_ = map(st_data.cvs.save, storage.cvs)
If all CVs are really cached, we can store the snapshots now; the auto-complete will fill the CV disk store automatically when snapshots are saved. This takes a little while.
len(st_split.snapshots)
%%time
_ = map(st_split.trajectories.mention, storage.trajectories)
print len(st_split.snapshots)
%%time
_ = map(st_data.trajectories.mention, storage.trajectories)
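As a sanity check, the new files should now know about roughly as many snapshots as the original storage, even though they contain hardly any coordinate data yet:
print len(st_split.snapshots), len(st_data.snapshots), len(storage.snapshots)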
Fill the trajectory store with the complete trajectories and their snapshots. We are using lots of small snapshots, which are slow to store compared to a few large ones, so this will also take a minute or so.
%%time
_ = map(st_traj.trajectories.save, storage.trajectories)
Finally, store all steps from the simulation in the data file. To keep that file small, we tell its snapshot store to only mention snapshots (no coordinates or velocities) instead of saving them fully. The resulting file should contain ALL you need for the analysis.
st_data.snapshots.only_mention = True
%%time
_ = map(st_data.steps.save, storage.steps)
And compare file sizes
print 'Original file:', storage.file_size_str
print 'Data file:', st_data.file_size_str
print 'Traj file:', st_traj.file_size_str
print 'So we saved about %2.0f %%' % ((1.0 - st_data.file_size / float(storage.file_size)) * 100.0)
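If you want to double-check outside of OPS, the file sizes can also be read directly from the filesystem; a minimal sketch:
import os
for fname in ['mstis.nc', 'mstis_data.nc', 'mstis_traj.nc']:
    print fname, '%.1f MB' % (os.path.getsize(fname) / 1024.0 / 1024.0)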
Now we do the trick: we use the small data file instead of the full simulation file and see whether the analysis still works.
st_data.close()
st_traj.close()
storage.close()
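Before switching to the analysis notebook, you can quickly confirm that the stripped data file stands on its own by reopening it read-only and checking that the steps and the network are there (a hedged sketch; no coordinates are touched):
check = paths.Storage('mstis_data.nc', 'r')
print len(check.steps), 'steps and', len(check.networks), 'networks in the data file'
check.close()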
(toy_mstis_A1_split.ipynb; toy_mstis_A1_split.py)
Analyzing a split MSTIS simulation
Included in this notebook:
- Open split files and look at the data
%matplotlib inline
import matplotlib.pyplot as plt
import openpathsampling as paths
import numpy as np
%%time
storage = paths.AnalysisStorage('mstis_data.nc')
Analyze the rate with no snapshots present in the analyzed file
mstis = storage.networks.load(0)
mstis.hist_args['max_lambda'] = { 'bin_width' : 0.02, 'bin_range' : (0.0, 0.5) }
mstis.hist_args['pathlength'] = { 'bin_width' : 5, 'bin_range' : (0, 150) }
%%time
mstis.rate_matrix(storage.steps, force=True)
Move scheme analysis
scheme = storage.schemes[0]
scheme.move_summary(storage.steps)
Replica move history tree
import openpathsampling.visualize as vis
reload(vis)
from IPython.display import SVG
tree = vis.PathTree(
storage.steps[0:200],
vis.ReplicaEvolution(replica=2, accepted=False)
)
SVG(tree.svg())
decorrelated = tree.generator.decorrelated
print "We have " + str(len(decorrelated)) + " decorrelated trajectories."
Visualizing trajectories
from toy_plot_helpers import ToyPlot
background = ToyPlot()
background.contour_range = np.arange(-1.5, 1.0, 0.1)
background.add_pes(storage.engines[0].pes)
xval = paths.FunctionCV("xval", lambda snap : snap.xyz[0][0])
yval = paths.FunctionCV("yval", lambda snap : snap.xyz[0][1])
live_vis = paths.StepVisualizer2D(mstis, xval, yval, [-1.0, 1.0], [-1.0, 1.0])
live_vis.background = background.plot()
To make this work we need the actual snapshot coordinates. These are no longer present in the data file, so we attach the trajectory file as a fallback. We do not use an AnalysisStorage here since we are not caching anything.
storage.cvs
fallback = paths.Storage('mstis_traj.nc', 'r')
storage.fallback = fallback
live_vis.draw_samples(list(tree.samples))
(toy_mstis_A2_split_analysis.ipynb; toy_mstis_A2_split_analysis.py)