pyneuroml.pynml module#

Python wrapper around the jnml command. (Thanks to Werner van Geit for an initial version of a python wrapper for jnml.)

For convenience and backward compatibility, this module also includes various helper/utility methods that are defined in other modules in the package. Please use these from their defined locations, as these imports will gradually be removed from here in the future.

pyneuroml.pynml.cell_info(cell: Cell) str#

Provide information on a NeuroML Cell instance:

  • morphological information:

    • Segment information:

      • parent segments

      • segment location, extents, diameter

      • segment length

      • segment surface area

      • segment volume

    • Segment group information:

      • included segments

  • biophysical properties:

    • channel densities

    • specific capacitances

Parameters:

cell (Cell) – cell object to investigate

Returns:

string of cell information

pyneuroml.pynml.cells_info(nml_file_name: str) str#

Provide information about the cells in a NeuroML file.

Parameters:

nml_file_name (str) – name of NeuroML v2 file

Returns:

information on cells (str)
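
A minimal usage sketch for cells_info and cell_info; the file name used here is hypothetical:

from pyneuroml.pynml import cell_info, cells_info, read_neuroml2_file

# print information on all cells defined in a NeuroML2 file
print(cells_info("cell_model.nml"))

# or inspect a single Cell object read from the file
doc = read_neuroml2_file("cell_model.nml")
print(cell_info(doc.cells[0]))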

pyneuroml.pynml.confirm_file_exists(filename: str) None#

Check if a file exists, exit if it does not.

Parameters:

filename (str) – the filename to check

pyneuroml.pynml.confirm_lems_file(filename: str) None#

Confirm that file exists and is a LEMS file before proceeding with processing.

Parameters:

filename (str) – name of file to check

pyneuroml.pynml.confirm_neuroml_file(filename: str) None#

Confirm that file exists and is a NeuroML file before proceeding with processing.

Parameters:

filename (str) – name of file to check

pyneuroml.pynml.convert_to_swc(nml_file_name, add_comments=False, target_dir=None)#

Find all <cell> elements in a NeuroML file and create one SWC file for each.

pyneuroml.pynml.convert_to_units(nml2_quantity: str, unit: str) float#

Convert a NeuroML2 quantity to provided unit.

Parameters:
  • nml2_quantity (str) – NeuroML2 quantity to convert

  • unit (str) – unit to convert to

Returns:

converted value (float)
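
A minimal sketch of a unit conversion; the quantity shown is illustrative:

from pyneuroml.pynml import convert_to_units

print(convert_to_units("0.01 V", "mV"))  # -> 10.0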

pyneuroml.pynml.evaluate_component(comp_type, req_variables={}, parameter_values={})#

Work in progress: expand a (simple) ComponentType and evaluate an instance of it by providing parameters and required variables. Used in the MOOSE NeuroML reader.

pyneuroml.pynml.execute_command_in_dir(command: str, directory: str, verbose: bool = False, prefix: str = 'Output: ', env: Mapping | None = None) Tuple[int, str]#

Execute a command in specific working directory

Parameters:
  • command (str) – command to run

  • directory (str) – directory to run command in

  • verbose (bool) – toggle verbose output

  • prefix (str) – string to prefix console output with

  • env (Mapping) – environment variables to be used
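
A minimal usage sketch; the command shown is arbitrary:

from pyneuroml.pynml import execute_command_in_dir

# returns a (return code, output) tuple
retcode, output = execute_command_in_dir("ls", ".", verbose=True)
print(retcode, output)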

pyneuroml.pynml.execute_command_in_dir_with_realtime_output(command: str, directory: str, verbose: bool = False, prefix: str = 'Output: ', env: str | None = None) bool#

Run a command in a given directory with real time output.

NOTE: this has only been tested on Linux.

Parameters:
  • command (str) – command to run

  • directory (str) – directory to run command in

  • verbose (bool) – toggle verbose output

  • prefix (str) – string to prefix output with

  • env (str) – environment variables to be used

pyneuroml.pynml.execute_multiple_in_dir(num_parallel: int | None, cmds_spec: List[Dict[Any, Any]]) List[Tuple[int, str]]#

Wrapper around execute_command_in_dir that allows running commands in parallel using ppft

Parameters:
  • num_parallel (None or int) – number of simulations to run in parallel, if None, ppft will auto-detect

  • cmds_spec (list of dict) –

    list of dictionaries, each holding the keyword arguments for one execute_command_in_dir call:

    [
        {
            "kwarg1": value
        }
    ]
    

Returns:

list of tuples returned from execute_command_in_dir

Return type:

list
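
A minimal sketch running two arbitrary commands in parallel:

from pyneuroml.pynml import execute_multiple_in_dir

cmds_spec = [
    {"command": "ls", "directory": "."},
    {"command": "ls -l", "directory": "."},
]
# each returned tuple holds the return code and output of one command
results = execute_multiple_in_dir(num_parallel=2, cmds_spec=cmds_spec)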

pyneuroml.pynml.extract_lems_definition_files(path: str | None | TemporaryDirectory = None) str#

Extract the NeuroML2 LEMS definition files to a directory and return its path.

This function can be used by other LEMS related functions that need to include the NeuroML2 LEMS definitions.

If a path is provided, the folder is created relative to the current working directory.

If no path is provided (for repeated usage, for example), the files are extracted to a temporary directory created using Python’s tempfile.mkdtemp function.

Note: in both cases, it is the user’s responsibility to remove the created directory when it is no longer required, for example using the shutil.rmtree() Python function.

Parameters:

path (str or None) – path of directory relative to current working directory to extract to, or None

Returns:

directory path
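
A minimal usage sketch showing extraction to a temporary directory and cleanup afterwards:

import shutil

from pyneuroml.pynml import extract_lems_definition_files

lems_def_dir = extract_lems_definition_files()
print("NeuroML2 LEMS definitions extracted to:", lems_def_dir)

# the caller is responsible for removing the directory
shutil.rmtree(lems_def_dir)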

pyneuroml.pynml.generate_interactive_plot(*args, **kwargs)#
pyneuroml.pynml.generate_lemsgraph(lems_file_name: str, verbose_generate: bool = True) bool#

Generate LEMS graph using jNeuroML

Parameters:
  • lems_file_name (str) – LEMS file to parse

  • verbose_generate (bool) – whether or not jnml should be run with verbosity output

Returns:

True if jnml ran without errors; exits without returning if jnml fails

pyneuroml.pynml.generate_nmlgraph(nml2_file_name: str, level: int = 1, engine: str = 'dot', **kwargs) None#

Generate NeuroML graph.

Parameters:
  • nml2_file_name (str) – NML file to parse

  • level (int) – level of graph to generate (default: ‘1’)

  • engine (str) – graph engine to use (default: ‘dot’)

  • kwargs – other keyword arguments to pass to GraphVizHandler. See the GraphVizHandler in NeuroMLlite for information on permissible arguments: NeuroML/NeuroMLlite
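
A minimal usage sketch; the file name is hypothetical:

from pyneuroml.pynml import generate_nmlgraph

# generate a level 3 graph of the model using the "dot" engine
generate_nmlgraph("network.net.nml", level=3, engine="dot")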

pyneuroml.pynml.generate_plot(*args, **kwargs)#
pyneuroml.pynml.generate_sim_scripts_in_folder(engine: str, lems_file_name: str, root_dir: str | None = None, run_dir: str | None = None, generated_files_dir_name: str | None = None, *engine_args: Any, **engine_kwargs: Any) str#

Generate simulation scripts in a new folder.

This method copies the model files and generates the simulation engine specific files (runner script for NEURON and mod files, for example) for the provided engine in a new folder. This is useful when running simulations on remote systems like a cluster or NSG which may not have the necessary dependencies installed to generate these scripts. One can then copy the folder to the remote system and run simulations there.

While copying the model files is not compulsory, we do it to ensure that there’s a clear correspondence between the set of model files and the simulation files generated from them. This also allows easy inspection of model files for debugging.

Added in version 1.0.14.

Parameters:
  • engine (str) – name of engine: the suffix of one of the run_lems_with_* functions

  • lems_file_name (str) – name of LEMS simulation file

  • root_dir (str) – directory in which the LEMS simulation file lives. Any included files must be relative to this main directory

  • run_dir (str) –

    directory in which model files are copied and backend specific files are generated.

    By default, this is the directory that the command is run from (“.”)

    It is good practice to separate directories where simulations are run from the source of the model/simulations.

  • generated_files_dir_name (str) – name of folder to move generated files to. If not provided, a _generated suffix is added to the name of the main directory that is created

  • engine_args – positional args to be passed to the engine runner function

  • engine_kwargs – keyword args to be passed to the engine runner function

Returns:

name of directory that was created

Return type:

str
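
A minimal usage sketch, assuming a hypothetical LEMS simulation file "LEMS_Sim.xml" in the current directory:

from pyneuroml.pynml import generate_sim_scripts_in_folder

sim_dir = generate_sim_scripts_in_folder(
    engine="jneuroml_neuron",
    lems_file_name="LEMS_Sim.xml",
)
print("Generated simulation files are in:", sim_dir)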

pyneuroml.pynml.get_lems_model_with_units() Model#

Get a LEMS model with NeuroML core dimensions and units.

Returns:

a lems.model.model.Model that includes NeuroML dimensions and units.

pyneuroml.pynml.get_path_to_jnml_jar() str#

Get the path to the jNeuroML jar included with PyNeuroML.

Returns:

path of jar file

pyneuroml.pynml.get_standalone_lems_model(nml_doc_fn: str) Model#

Get the complete, expanded LEMS model.

This function takes a NeuroML2 file, includes all the NeuroML2 LEMS definitions in it and generates the complete, standalone LEMS model.

Parameters:

nml_doc_fn (str) – name of NeuroML file to expand

Returns:

complete LEMS model

pyneuroml.pynml.get_value_in_si(nml2_quantity: str) float | None#

Get value of a NeuroML2 quantity in SI units

Parameters:

nml2_quantity (str) – NeuroML2 quantity to convert

Returns:

value in SI units (float)
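
A minimal sketch; the quantity shown is illustrative:

from pyneuroml.pynml import get_value_in_si

print(get_value_in_si("-70 mV"))  # -> -0.07 (value in SI units: volts)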

pyneuroml.pynml.list_exposures(nml_doc_fn: str, substring: str = '') dict[Component, List[Exposure]] | None#

List exposures in a NeuroML model document file.

This wraps around lems.model.list_exposures to list the exposures in a NeuroML2 model. The only difference between the two is that the lems.model.list_exposures function is not aware of the NeuroML2 component types (since it’s for any LEMS models in general), but this one is.

Parameters:
  • nml_doc_fn – NeuroML2 file to list exposures for

  • substring (str) – substring to match for in component names

Returns:

dictionary of components and their exposures, of the form {Component: [Exposure, ...]} (or None)
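
A minimal usage sketch, assuming a hypothetical NeuroML2 file "model.nml" with component ids matching the substring:

from pyneuroml.pynml import list_exposures

exposures = list_exposures("model.nml", substring="iaf")
if exposures is not None:
    for component, exposure_list in exposures.items():
        print(component.id, [exposure.name for exposure in exposure_list])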

pyneuroml.pynml.list_recording_paths_for_exposures(nml_doc_fn: str, substring: str = '', target: str = '') List[str]#

List the recording path strings for exposures.

This wraps around lems.model.list_recording_paths to list the recording paths in the given NeuroML2 model. The only difference between the two is that the lems.model.list_recording_paths function is not aware of the NeuroML2 component types (since it’s for any LEMS models in general), but this one is.

Parameters:
  • nml_doc_fn – NeuroML2 file to list recording paths for

  • substring (str) – substring to match component ids against

  • target (str) – name of the target component used as the root of the generated recording paths

Returns:

list of recording paths

pyneuroml.pynml.nml2_to_png(nml2_file_name: str, max_memory: str = '400M', verbose: bool = True) None#

Generate the PNG representation of a NeuroML model using jnml

Parameters:
  • nml2_file_name (str) – name of NeuroML2 file to generate PNG for

  • max_memory (str) – maximum memory allowed for use by the JVM

  • verbose (bool) – toggle whether jnml should print verbose information

pyneuroml.pynml.nml2_to_svg(nml2_file_name: str, max_memory: str = '400M', verbose: bool = True) None#

Generate the SVG representation of a NeuroML model using jnml

Parameters:
  • nml2_file_name (str) – name of NeuroML2 file to generate SVG for

  • max_memory (str) – maximum memory allowed for use by the JVM

  • verbose (bool) – toggle whether jnml should print verbose information

pyneuroml.pynml.quick_summary(nml2_doc: NeuroMLDocument) str#

Get a quick summary of the NeuroML2 document

NOTE: You should prefer nml2_doc.summary(show_includes=False)

Parameters:

nml2_doc (NeuroMLDocument) – NeuroMLDocument to fetch summary for

Returns:

summary string

pyneuroml.pynml.read_lems_file(lems_file_name: str, include_includes: bool = False, fail_on_missing_includes: bool = False, debug: bool = False) Model#

Read LEMS file using PyLEMS. See WARNING below.

WARNING: this is a general function that uses PyLEMS to read any files that are valid LEMS even if they are not valid NeuroML. Therefore, this function is not aware of the standard NeuroML LEMS definitions.

To validate NeuroML LEMS files which need to be aware of the NeuroML standard LEMS definitions, please use the validate_neuroml2_lems_file function instead.

pyneuroml.pynml.read_neuroml2_file(nml2_file_name: str, include_includes: bool = False, verbose: bool = False, already_included: list | None = None, optimized: bool = False, check_validity_pre_include: bool = False) NeuroMLDocument#

Read a NeuroML2 file into a nml.NeuroMLDocument

Parameters:
  • nml2_file_name (str) – name of NeuroML 2 file to read

  • include_includes (bool) – toggle whether files included in NML file should also be included/read

  • verbose (bool) – toggle verbosity

  • already_included (list) – list of files already included

  • optimized (bool) – toggle whether the HDF5 loader should optimise the document

  • check_validity_pre_include (bool) – check each file for validity before including

Returns:

nml.NeuroMLDocument object containing the read NeuroML file(s)
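
A minimal usage sketch; the file name is hypothetical:

from pyneuroml.pynml import read_neuroml2_file

# read the document, also reading in any <include>d files
nml_doc = read_neuroml2_file("model.net.nml", include_includes=True)
print(nml_doc.summary(show_includes=False))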

pyneuroml.pynml.reload_saved_data(lems_file_name: str, base_dir: str = '.', t_run: datetime = datetime.datetime(1900, 1, 1, 0, 0), plot: bool = False, show_plot_already: bool = True, simulator: str | None = None, reload_events: bool = False, verbose: bool = False, remove_dat_files_after_load: bool = False) dict | Tuple[dict, dict]#

Reload data saved from previous LEMS simulation run.

Parameters:
  • lems_file_name (str) – name of LEMS file that was used to generate the data

  • base_dir (str) – directory to run in

  • t_run (datetime) – time of run

  • plot (bool) – toggle plotting

  • show_plot_already (bool) – toggle if plots should be shown

  • simulator (str) – simulator that was used to generate data

  • reload_events (bool) – toggle whether events should be loaded

  • verbose (bool) – toggle verbose output

  • remove_dat_files_after_load (bool) – toggle if data files should be deleted after they’ve been loaded

TODO: remove unused verbose argument (needs checking to see if it is being used in other places)

pyneuroml.pynml.reload_standard_dat_file(file_name: str) Tuple[dict, list]#

Reload a data file in the format usually saved by jLEMS and similar tools: the first column is time (in seconds), followed by one column per recorded variable.

Parameters:

file_name (str) – name of data file to load

Returns:

tuple of (data, column names)
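
A minimal sketch, assuming a hypothetical output file "Sim_output.dat" saved by jLEMS:

from pyneuroml.pynml import reload_standard_dat_file

data, column_names = reload_standard_dat_file("Sim_output.dat")
print(column_names)       # identifiers of the data columns
print(list(data.keys()))  # keys of the loaded data dictionary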

pyneuroml.pynml.run_jneuroml(pre_args: str, target_file: str, post_args: str, max_memory: str = '400M', exec_in_dir: str = '.', verbose: bool = False, report_jnml_output: bool = True, exit_on_fail: bool = False, return_string: bool = False) Tuple[bool, str] | bool#

Run jnml with provided arguments.

Parameters:
  • pre_args (str) – pre-file name arguments

  • target_file (str) – LEMS or NeuroML file to run jnml on

  • post_args (str) – post-file name arguments

  • max_memory (str) – maximum memory allowed for use by the JVM. Note that the default value of this can be overridden using the JNML_MAX_MEMORY_LOCAL environment variable

  • exec_in_dir (str) – working directory to execute LEMS simulation in

  • verbose (bool) – toggle whether jnml should print verbose information

  • report_jnml_output (bool) – toggle whether jnml output should be printed

  • exit_on_fail (bool) – toggle whether command should exit if jnml fails

  • return_string (bool) – toggle whether the output string should be returned

Returns:

either a bool, or a Tuple (bool, str) depending on the value of return_string: True if jnml ran successfully, False if not; along with the output of the command
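
A minimal sketch that uses jnml to validate a hypothetical NeuroML2 file (roughly what validate_neuroml2 does internally):

from pyneuroml.pynml import run_jneuroml

success = run_jneuroml("-validate", "model.net.nml", "")
print("Validation passed:", success)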

pyneuroml.pynml.run_jneuroml_with_realtime_output(pre_args: str, target_file: str, post_args: str, max_memory: str = '400M', exec_in_dir: str = '.', verbose: bool = False, exit_on_fail: bool = True) bool#

Run jnml with provided arguments with realtime output.

NOTE: this has only been tested on Linux.

Parameters:
  • pre_args (str) – pre-file name arguments

  • target_file (str) – LEMS or NeuroML file to run jnml on

  • post_args (str) – post-file name arguments

  • max_memory (str) – maximum memory allowed for use by the JVM

  • exec_in_dir (str) – working directory to execute LEMS simulation in

  • verbose (bool) – toggle whether jnml should print verbose information

  • exit_on_fail (bool) – toggle whether command should exit if jnml fails

pyneuroml.pynml.run_lems_with(engine: str, *args: Any, **kwargs: Any)#

Run LEMS with specified engine.

Wrapper around the many run_lems_with_* methods. The engine should be the suffix, for example, to use run_lems_with_jneuroml_neuron, engine will be jneuroml_neuron.

All kwargs are passed as is to the function. Please see the individual function documentations for information on arguments.

Parameters:
  • engine (str) – engine to run with; valid names are the suffixes of the run_lems_with_* methods

  • args – positional arguments to pass to the run function

  • kwargs – named arguments to pass to run function

Returns:

return value of called method
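
A minimal usage sketch; the engine name and file name are illustrative:

from pyneuroml.pynml import run_lems_with

# equivalent to calling run_lems_with_jneuroml_neuron directly
results = run_lems_with(
    "jneuroml_neuron",
    lems_file_name="LEMS_Sim.xml",
    nogui=True,
    load_saved_data=True,
)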

pyneuroml.pynml.run_lems_with_eden(lems_file_name: str, load_saved_data: bool = False, reload_events: bool = False, verbose: bool = False) bool | dict | Tuple[dict, dict]#

Run LEMS file with the EDEN simulator

Parameters:
  • lems_file_name (str) – name of LEMS file to run

  • load_saved_data (bool) – toggle whether any saved data should be loaded

  • reload_events (bool) – toggle whether events should be reloaded

  • verbose (bool) – toggle whether to print verbose information

pyneuroml.pynml.run_lems_with_jneuroml(lems_file_name: str, paths_to_include: list = [], max_memory: str = '400M', skip_run: bool = False, nogui: bool = False, load_saved_data: bool = False, reload_events: bool = False, plot: bool = False, show_plot_already: bool = True, exec_in_dir: str = '.', verbose: bool = False, exit_on_fail: bool = True, cleanup: bool = False) bool | dict | Tuple[dict, dict]#

Parse/Run a LEMS file with jnml.

Tip: set skip_run=True to only parse the LEMS file but not run the simulation.

Parameters:
  • lems_file_name (str) – name of LEMS file to run

  • paths_to_include (list(str)) – additional directory paths to include (for other NML/LEMS files, for example)

  • max_memory (str) – maximum memory allowed for use by the JVM

  • skip_run (bool) – toggle whether run should be skipped, if skipped, file will only be parsed

  • nogui (bool) – toggle whether jnml GUI should be shown

  • load_saved_data (bool) – toggle whether any saved data should be loaded

  • reload_events (bool) – toggle whether events should be reloaded

  • plot (bool) – toggle whether specified plots should be plotted

  • show_plot_already (bool) – toggle whether prepared plots should be shown

  • exec_in_dir (str) – working directory to execute LEMS simulation in

  • verbose (bool) – toggle whether jnml should print verbose information

  • exit_on_fail (bool) – toggle whether command should exit if jnml fails

  • cleanup (bool) – toggle whether the directory should be cleaned of generated files after run completion
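
A minimal usage sketch; the file name is hypothetical:

from pyneuroml.pynml import run_lems_with_jneuroml

# run the simulation headless and load the data it saves
results = run_lems_with_jneuroml(
    "LEMS_Sim.xml",
    nogui=True,
    load_saved_data=True,
)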

pyneuroml.pynml.run_lems_with_jneuroml_brian2(lems_file_name: str, paths_to_include: List[str] = [], max_memory: str = '400M', skip_run: bool = False, nogui: bool = False, load_saved_data: bool = False, reload_events: bool = False, plot: bool = False, show_plot_already: bool = True, exec_in_dir: str = '.', verbose: bool = False, exit_on_fail: bool = True, cleanup: bool = False) bool | dict | Tuple[dict, dict]#

Run LEMS file with the Brian2 simulator

Tip: set skip_run=True to only parse the LEMS file but not run the simulation.

Parameters:
  • lems_file_name (str) – name of LEMS file to run

  • paths_to_include (list(str)) – additional directory paths to include (for other NML/LEMS files, for example)

  • max_memory (str) – maximum memory allowed for use by the JVM

  • skip_run (bool) – toggle whether run should be skipped, if skipped, file will only be parsed

  • nogui (bool) – toggle whether jnml GUI should be shown

  • load_saved_data (bool) – toggle whether any saved data should be loaded

  • reload_events (bool) – toggle whether events should be reloaded

  • plot (bool) – toggle whether specified plots should be plotted

  • show_plot_already (bool) – toggle whether prepared plots should be shown

  • exec_in_dir (str) – working directory to execute LEMS simulation in

  • verbose (bool) – toggle whether jnml should print verbose information

  • exit_on_fail (bool) – toggle whether command should exit if jnml fails

  • cleanup (bool) – toggle whether the directory should be cleaned of generated files after run completion

pyneuroml.pynml.run_lems_with_jneuroml_netpyne(lems_file_name: str, paths_to_include: List[str] = [], max_memory: str = '400M', skip_run: bool = False, nogui: bool = False, num_processors: int = 1, load_saved_data: bool = False, reload_events: bool = False, plot: bool = False, show_plot_already: bool = True, exec_in_dir: str = '.', only_generate_scripts: bool = False, only_generate_json: bool = False, verbose: bool = False, exit_on_fail: bool = True, return_string: bool = False, cleanup: bool = False) bool | Tuple[bool, str] | dict | Tuple[dict, dict]#

Run LEMS file with the NetPyNE simulator

Tip: set skip_run=True to only parse the LEMS file but not run the simulation.

Parameters:
  • lems_file_name (str) – name of LEMS file to run

  • paths_to_include (list(str)) – additional directory paths to include (for other NML/LEMS files, for example)

  • max_memory (str) – maximum memory allowed for use by the JVM

  • skip_run (bool) – toggle whether run should be skipped, if skipped, file will only be parsed

  • nogui (bool) – toggle whether jnml GUI should be shown

  • num_processors (int) – number of processors to use for running NetPyNE

  • load_saved_data (bool) – toggle whether any saved data should be loaded

  • reload_events (bool) – toggle whether events should be reloaded

  • plot (bool) – toggle whether specified plots should be plotted

  • show_plot_already (bool) – toggle whether prepared plots should be shown

  • exec_in_dir (str) – working directory to execute LEMS simulation in

  • only_generate_scripts (bool) – toggle whether only the runner script should be generated

  • only_generate_json (bool) – toggle whether only the NetPyNE JSON representation should be generated

  • verbose (bool) – toggle whether jnml should print verbose information

  • exit_on_fail (bool) – toggle whether command should exit if jnml fails

  • return_string (bool) – toggle whether command output string should be returned

  • cleanup (bool) – toggle whether the directory should be cleaned of generated files after run completion

Returns:

either a bool, or a Tuple (bool, str) depending on the value of return_string: True if jnml ran successfully, False if not; along with the output of the command. If load_saved_data is True, it returns a dict with the data

pyneuroml.pynml.run_lems_with_jneuroml_neuron(lems_file_name: str, paths_to_include: List[str] = [], max_memory: str = '400M', skip_run: bool = False, nogui: bool = False, load_saved_data: bool = False, reload_events: bool = False, plot: bool = False, show_plot_already: bool = True, exec_in_dir: str = '.', only_generate_scripts: bool = False, compile_mods: bool = True, verbose: bool = False, exit_on_fail: bool = True, cleanup: bool = False, realtime_output: bool = False) bool | dict | Tuple[dict, dict]#

Run LEMS file with the NEURON simulator

Tip: set skip_run=True to only parse the LEMS file but not run the simulation.

Parameters:
  • lems_file_name (str) – name of LEMS file to run

  • paths_to_include (list(str)) – additional directory paths to include (for other NML/LEMS files, for example)

  • max_memory (str) – maximum memory allowed for use by the JVM

  • skip_run (bool) – toggle whether run should be skipped, if skipped, file will only be parsed

  • nogui (bool) – toggle whether jnml GUI should be shown

  • load_saved_data (bool) – toggle whether any saved data should be loaded

  • reload_events (bool) – toggle whether events should be reloaded

  • plot (bool) – toggle whether specified plots should be plotted

  • show_plot_already (bool) – toggle whether prepared plots should be shown

  • exec_in_dir (str) – working directory to execute LEMS simulation in

  • only_generate_scripts (bool) – toggle whether only the runner script should be generated

  • compile_mods (bool) – toggle whether generated mod files should be compiled

  • verbose (bool) – toggle whether jnml should print verbose information

  • exit_on_fail (bool) – toggle whether command should exit if jnml fails

  • cleanup (bool) – toggle whether the directory should be cleaned of generated files after run completion

  • realtime_output (bool) – toggle whether realtime output should be shown

pyneuroml.pynml.run_multiple_lems_with(num_parallel: int | None, sims_spec: Dict[Any, Any])#

Run multiple LEMS simulation files in a pool.

Uses the ppft module.

Parameters:
  • num_parallel (None or int) – number of simulations to run in parallel, if None, ppft will auto-detect

  • sims_spec (dict) –

    dictionary with simulation specifications

    Each key of the dict should be the name of the LEMS file to be simulated, and the corresponding value should be a dictionary containing the engine, the positional arguments, and the keyword arguments to pass to the run_lems_with method:

{
    "LEMS1.xml": {
        "engine": "name of engine",
        "args": ("arg1", "arg2"),
        "kwargs": {
            "kwarg1": value
        }
    }
}

    Note that since the name of the simulation file and the engine are already explicitly provided, these should not be included again in the args/kwargs

Returns:

dict with results of runs, depending on given arguments:

{
    "LEMS1.xml": <results>
}

Return type:

dict
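
A minimal sketch running two hypothetical LEMS simulations in parallel with jNeuroML:

from pyneuroml.pynml import run_multiple_lems_with

sims_spec = {
    "LEMS_Sim1.xml": {"engine": "jneuroml", "args": (), "kwargs": {"nogui": True}},
    "LEMS_Sim2.xml": {"engine": "jneuroml", "args": (), "kwargs": {"nogui": True}},
}
results = run_multiple_lems_with(2, sims_spec)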

pyneuroml.pynml.split_nml2_quantity(nml2_quantity: str) Tuple[float, str]#

Split a NeuroML 2 quantity into its magnitude and units

Parameters:

nml2_quantity (str) – NeuroML2 quantity to split

Returns:

a tuple (magnitude, unit)
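
A minimal sketch; the quantity shown is illustrative:

from pyneuroml.pynml import split_nml2_quantity

magnitude, unit = split_nml2_quantity("-65 mV")  # -> (-65.0, "mV")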

pyneuroml.pynml.summary(nml2_doc: NeuroMLDocument | None = None, verbose: bool = False) None#

Wrapper around nml_doc.summary(), used to implement the pynml-summary command line tool.

Parameters:
  • nml2_doc (NeuroMLDocument) – NeuroMLDocument object or name of NeuroML v2 file to get summary for.

  • verbose (bool) – toggle verbosity

pyneuroml.pynml.validate_neuroml1(nml1_file_name: str, verbose_validate: bool = True, return_string: bool = False) bool | Tuple[bool, str]#

Validate a NeuroML v1 file.

NOTE: NeuroML v1 is deprecated. Please use NeuroML v2. This functionality will be dropped in the future.

Parameters:
  • nml1_file_name (str) – name of NeuroMLv1 file to validate

  • verbose_validate (bool (default: True)) – whether jnml should print verbose information while validating

  • return_string (bool) – toggle to enable or disable returning the output of the jnml validation

Returns:

Either a bool, or a tuple (bool, str): True if jnml ran without errors, False if it failed; along with the message returned by jnml

pyneuroml.pynml.validate_neuroml2(nml2_file_name: str, verbose_validate: bool = True, max_memory: str | None = None, return_string: bool = False) bool | Tuple[bool, str]#

Validate a NeuroML2 file using jnml.

Parameters:
  • nml2_file_name (str) – name of NeuroML 2 file to validate

  • verbose_validate (bool (default: True)) – whether jnml should print verbose information while validating

  • max_memory (str) – maximum memory the JVM should use while running jnml

  • return_string (bool) – toggle to enable or disable returning the output of the jnml validation

Returns:

Either a bool, or a tuple (bool, str): True if jnml ran without errors, False if it failed; along with the message returned by jnml
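
A minimal usage sketch; the file name is hypothetical:

from pyneuroml.pynml import validate_neuroml2

valid, output = validate_neuroml2("model.net.nml", return_string=True)
if not valid:
    print(output)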

pyneuroml.pynml.validate_neuroml2_lems_file(nml2_lems_file_name: str, max_memory: str = '400M') bool#

Validate a NeuroML 2 LEMS file using jNeuroML.

Note that this uses jNeuroML and so is aware of the standard NeuroML LEMS definitions.

TODO: allow inclusion of other paths for user-defined LEMS definitions (does the -norun option allow the use of -I?)

Parameters:
  • nml2_lems_file_name (str) – name of file to validate

  • max_memory (str) – memory to use for the Java virtual machine

Returns:

True if valid, False if invalid

pyneuroml.pynml.version_info(detailed: bool = False)#

Print version information.

Parameters:

detailed (bool) – also print information about installed simulation engines

pyneuroml.pynml.write_lems_file(lems_model: Model, lems_file_name: str, validate: bool = False) None#

Write a lems_model.Model to file using pyLEMS.

Parameters:
  • lems_model (lems_model.Model) – LEMS model to write to file

  • lems_file_name (str) – name of file to write to

  • validate (bool) – toggle whether written file should be validated

pyneuroml.pynml.write_neuroml2_file(nml2_doc: NeuroMLDocument, nml2_file_name: str, validate: bool = True, verbose_validate: bool = False, hdf5: bool = False) bool | Tuple[bool, str] | None#

Write a NeuroMLDocument object to a file using libNeuroML.

Parameters:
  • nml2_doc (NeuroMLDocument) – NeuroMLDocument object to write to file

  • nml2_file_name (str) – name of file to write to

  • validate (bool) – toggle whether the written file should be validated

  • verbose_validate (bool) – toggle whether the validation should be verbose

  • hdf5 (bool) – write to HDF5 file
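
A minimal sketch that writes out an essentially empty, hypothetical document and validates it:

from neuroml import NeuroMLDocument

from pyneuroml.pynml import write_neuroml2_file

nml_doc = NeuroMLDocument(id="TestDoc")
write_neuroml2_file(nml_doc, "test_doc.nml", validate=True)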