Source code documentation

Framework

omfit_tree.reload_python(moduleName, quiet=False)[source]

This function extends the Python builtin reload function to easily reload OMFIT classes

>>> reload_python(omfit_classes.omfit_onetwo)
Parameters
  • moduleName – module or module name

  • quiet – bool, suppress print statements listing reloaded objects

class omfit_tree.OMFITmaintree(*args, **kwargs)[source]

Bases: omfit_classes.omfit_base.OMFITproject

Parameters
  • filename – ‘directory/bla/OMFITsave.txt’ or ‘directory/bla.zip’ where the OMFITtree will be saved (if ‘’ it will be saved in the same folder as the parent OMFITtree)

  • only – list of strings used to load only some of the branches from the tree (e.g. ["['MainSettings']", "['myModule']['SCRIPTS']"])

  • modifyOriginal – by default OMFIT will save a copy and then overwrite previous save only if successful. If modifyOriginal=True and filename is not .zip, will write data directly at destination, which will be faster but comes with the risk of deleting a good save if the new save fails for some reason

  • readOnly – will place an entry in the OMFITsave.txt of the parent so that this OMFITtree can be loaded, but will not save the actual content of this subtree. readOnly=True is meant to be used only after this subtree is deployed where its filename says it will be. Using this feature can result in much faster project saves if the content of this tree is large.

  • quiet – Verbosity level

  • developerMode – load OMFITpython objects within the tree as modifyOriginal

  • serverPicker – take server/tunnel info from MainSettings[‘SERVER’]

  • remote – access the filename in the remote directory

  • server – if specified, the file will be synced down from this server

  • tunnel – access the filename via the tunnel

  • **kw – Extra keywords are passed to the SortedDict class

start(filename='')[source]
projectName()[source]
onlyRunningCopy(deletePID=False)[source]
Parameters

deletePID – whether to remove the PID from the list of running OMFIT processes (this should be done only at exit)

Returns

True/False whether this is the only running copy of OMFIT on this computer

reset()[source]
newProvenanceID()[source]
newProjectID()[source]
addMainSettings(updateUserSettings=False, restore='')[source]
add_bindings_to_main_settings()[source]

Take the descriptions and events from global_events_bindings and insert them in self[‘MainSettings’][‘SETUP’][‘KeyBindings’][desc] = <event>

apply_bindings()[source]

Take the descriptions and events from self[‘MainSettings’][‘SETUP’][‘KeyBindings’] and use them to update the global_event_bindings

save(quiet=None, skip_save_errors=False)[source]

Writes the content of the OMFIT tree to the filesystem using the same filename and zip options of the last saveas

Parameters
  • quiet – whether to print save progress to screen

  • skip_save_errors – skip errors when saving objects

saveas(filename, zip=None, quiet=None, skip_save_errors=False)[source]

Writes the content of the OMFIT tree to the filesystem and permanently changes the name of the project

Parameters
  • filename – project filename to save to

  • zip – whether the save should occur as a zip file

  • quiet – whether to print save progress to screen

  • skip_save_errors – skip errors when saving objects

loadModule(filename, location=None, withSubmodules=True, availableModulesList=None, checkLicense=True, developerMode=None, depth=0, quiet=False, startup_lib=None, **kw)[source]

Load a module in OMFIT

Parameters
  • filename

    • full path to the module to be loaded

    • if just the module name is provided, this will be loaded from the public modules

    • remote/branch:module format will load a module from a specific git remote and branch

    • module:remote/branch format will load a module from a specific git remote and branch

  • location – string with the location where to place the module in the OMFIT tree

  • withSubmodules – load submodules or not

  • availableModulesList – list of available modules generated by OMFIT.availableModules() If this list is not passed, then the availableModulesList is generated internally

  • checkLicense – Check license files at load

  • developerMode – Load module with developer mode option (ie. scripts loaded as modifyOriginal) if None then default behavior is set by OMFIT['MainSettings']['SETUP']['developer_mode_by_default'] Note: public OMFIT installation cannot be loaded in developer mode

  • depth – parameter used internally to keep track of the recursion depth

  • quiet – load modules silently or not

  • startup_lib – Used internally for executing OMFITlib_startup scripts

  • **kw – additional keywords passed to OMFITmodule() class

Returns

instance of the loaded module

load(filename, persistent_projectID=False)[source]

loads an OMFITproject in OMFIT

Parameters
  • filename – filename of the project to load (if -1 then the most recent project is loaded)

  • persistent_projectID – whether the projectID should remain the same

updateCWD()[source]
updateMainSettings()[source]
saveMainSettingsSkeleton()[source]

This utility function updates the MainSettingsSkeleton for the current OMFIT installation

availableModules(quiet=None, directories=None, same_path_as=None, force=False)[source]

Index available OMFIT modules in directories

Parameters
  • quiet – verbosity

  • directories – list of directories to index. If None this is taken from OMFIT[‘MainSettings’][‘SETUP’][‘modulesDir’]

  • same_path_as – sample OMFITsave.txt path to set directory to index

  • force – force rebuild of .modulesInfoCache file

Returns

This method returns a dictionary with the available OMFIT modules. Each element in the dictionary is a dictionary itself with the details of the available modules.

quit(deepClean=False)[source]

Clean up the current OMFIT session directory OMFITsessionDir and remove the PID from OMFITsessionsDir. Also close all SSH-related tunnels and connections.

Parameters

deepClean – if deepClean is True, then the OMFITtmpDir and OMFITsessionsDir get deleted

reset_connections(server=None, mds_cache=True)[source]

Reset connections, stored passwords, and MDS+ cache

Parameters

server – only reset SSH connections to this server

recentProjects(only_read=False)[source]

This routine keeps track of the recent projects and is also in charge of deleting auto-saves if they do not appear in the project list.

Parameters

only_read – only read the projects file (don’t do any maintenance on it)

showExecDiag()[source]

display execution diagram

class omfit_tree.OMFITmainscratch(*args, **kwargs)[source]

Bases: omfit_classes.omfit_base.OMFITtmp

initialize()[source]
load_framework_gui(item)[source]
omfit_tree.locationFromRoot(location)[source]
Parameters

location – tree location

Returns

tree location from the closest root

omfit_tree.rootLocation(location, *args, **kw)[source]
Parameters

location – tree location

Returns

tree location of the root

omfit_tree.OMFITobject_fromS3(filename, s3bucket)[source]

Recovers OMFITobject from S3 and reloads it with the right class and original keyword parameters

Returns

object loaded from S3

class omfit_tree.OMFITfileASCII(filename, **kw)[source]

Bases: omfit_classes.omfit_ascii.OMFITascii

Use of this class is deprecated: use OMFITascii instead

class omfit_tree.OMFIT_Ufile(*args, **kwargs)[source]

Bases: omfit_classes.omfit_ufile.OMFITuFile

Use of this class is deprecated: use OMFITuFile instead

class omfit_tree.OMFITdict(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict

Use of this class is deprecated: use SortedDict instead

class omfit_tree.chebyfit(x, y, yerr, m=18)[source]

Bases: omfit_classes.utils_fit.fitCH

Use of this class is deprecated: use fitCH instead

exception omfit_tree.OMFITdependenceException[source]

Bases: RuntimeError

Use of this class is deprecated: use RuntimeError instead

omfit_classes.sortedDict.sortHuman(inStr)[source]

Sort the given list the way that humans expect
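The “human” (natural) ordering can be sketched with a small helper that compares digit runs numerically; this is an illustrative implementation, not the exact OMFIT one:

```python
import re

def sort_human(in_list):
    """Sort strings the way humans expect: 'x2' before 'x10'.

    A minimal sketch of natural sorting; the OMFIT implementation
    may differ in details such as case handling.
    """
    def natural_key(s):
        # Split into digit and non-digit runs; compare digit runs as integers
        return [int(tok) if tok.isdigit() else tok.lower()
                for tok in re.split(r'(\d+)', s)]
    return sorted(in_list, key=natural_key)
```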

omfit_classes.sortedDict.get_bases(clss, tp=[])[source]

Returns a list of strings describing the dependencies of a class

omfit_classes.sortedDict.parseBuildLocation(inv)[source]

DEPRECATED: use parseLocation and buildLocation functions instead

Function to handle locations in the OMFIT tree (i.e. python dictionaries)

Parameters

inv – input location

Returns

  • if inv is a string, then the dictionary path is split and a list is returned (Note that this function strips the root name)

  • if it’s a list, then the dictionary path is assembled and a string is returned (Note that this function assumes that the root name is missing)

omfit_classes.sortedDict.parseLocation(inv)[source]

Parse a string representation of the dictionary path and return a list including the root name. This function can parse things like: OMFIT['asd'].attributes[u'aiy']["['bla']['asa']"][3][1:5]

Parameters

inv – string representation of the dictionary path

Returns

list of dictionary keys including rootname

omfit_classes.sortedDict.traverseLocation(inv)[source]

returns list of locations to reach input location

Parameters

inv – string representation of the dictionary path

Returns

list of locations including rootname to reach input location

omfit_classes.sortedDict.buildLocation(inv)[source]

Assemble list of keys into dictionary path string

Parameters

inv – list of dictionary keys including rootname

Returns

string representation of the dictionary path
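As a sketch of how parseLocation and buildLocation round-trip, here is a simplified pair of helpers that handle only quoted string keys (the real functions also handle integer indices, slices, and attribute access):

```python
import re

def parse_location(path):
    """Split "OMFIT['a']['b']" into ['OMFIT', 'a', 'b'].

    Simplified sketch: handles only quoted string keys, unlike the
    real parseLocation which also parses ints, slices, and attributes.
    """
    root, _, rest = path.partition('[')
    keys = re.findall(r"\['([^']*)'\]", '[' + rest) if rest else []
    return [root] + keys

def build_location(keys):
    """Inverse operation: assemble a key list back into a path string."""
    return keys[0] + ''.join("['%s']" % k for k in keys[1:])
```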

omfit_classes.sortedDict.setLocation(location, value, globals=None, locals=None)[source]

Takes a string or a list of strings output by parseLocation() and sets the leaf to the value provided

Parameters
  • location – string or a list of strings output by parseLocation()

  • value – value to set the leaf

  • globals – global namespace for the evaluation of the location

  • locals – local namespace for the evaluation of the location

Returns

value

omfit_classes.sortedDict.dirbaseLocation(location)[source]

Takes a string or a list of strings output by parseLocation() and returns two strings for convenient setting of dictionary locations

>> d, b = dirbaseLocation("OMFIT['dir']['base']")
>> eval(d)[b]
d = "OMFIT['dir']"
b = 'base'

Parameters

location – string or a list of strings output by parseLocation()

Returns

two strings: the first with the path leading to the leaf, the second with the name of the leaf

omfit_classes.sortedDict.traverse(self, string='', level=100, split=True, onlyDict=False, onlyLeaf=False, skipDynaLoad=False, noSubmodules=False, traverse_classes=(<class 'collections.abc.MutableMapping'>, ))[source]

Returns a string or list of strings describing the path of every entry/subentries in the dictionary

Parameters
  • string – string to be appended in front of all entries

  • level – maximum depth

  • split – split the output string into a list of strings

  • onlyDict – return only dictionary entries (can be a tuple of classes)

  • onlyLeaf – return only non-dictionary entries (can be a tuple of classes)

  • skipDynaLoad – skip entries that have .dynaLoad==True

  • noSubmodules – controls whether to traverse submodules or not

  • traverse_classes – tuple of classes to traverse

Returns

string or list of string
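The behavior can be illustrated with a minimal traversal over plain dicts, ignoring the level, filter, and class options of the real function:

```python
def traverse(d, prefix=''):
    """Return path strings for every entry/subentry in a nested dict.

    Simplified sketch of the traverse utility: only recurses into
    plain dicts and ignores level/onlyDict/onlyLeaf options.
    """
    paths = []
    for key, value in d.items():
        path = "%s['%s']" % (prefix, key)
        paths.append(path)
        if isinstance(value, dict):
            # Descend into sub-dictionaries, extending the path prefix
            paths.extend(traverse(value, path))
    return paths
```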

omfit_classes.sortedDict.treeLocation(obj, memo=None)[source]

Identifies location in the OMFIT tree of an OMFIT object

NOTE: Typical users should not need to use this function as part of their modules. If you find yourself using this function in your modules, it is likely that OMFIT already provides the functionality that you are looking for in some other way. We recommend reaching out to the OMFIT developers team to see if there is an easy way to get what you want.

Parameters
  • obj – object in the OMFIT tree

  • memo – used internally to avoid infinite recursions

Returns

string with tree location

omfit_classes.sortedDict.recursiveUpdate(A, B, overwrite=True, **kw)[source]

Recursive update of dictionary A based on data from dictionary B

Parameters
  • A – dictionary A

  • B – dictionary B

  • overwrite – force overwrite of duplicates

Returns

updated dictionary
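A minimal sketch of the documented merge semantics, assuming A is updated in place:

```python
def recursive_update(A, B, overwrite=True):
    """Recursively merge dictionary B into dictionary A in place.

    Sketch of the documented behavior: nested dicts are merged,
    and existing values are replaced only when overwrite=True.
    """
    for key, value in B.items():
        if key in A and isinstance(A[key], dict) and isinstance(value, dict):
            # Both sides are dicts: merge rather than replace
            recursive_update(A[key], value, overwrite)
        elif key not in A or overwrite:
            A[key] = value
    return A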

omfit_classes.sortedDict.pretty_diff(d, ptrn={})[source]

generate “human readable” dictionary output from SortedDict.diff()

omfit_classes.sortedDict.prune_mask(what, ptrn)[source]

prune dictionary structure based on a mask. The mask can be in the form of a pretty_diff dictionary or a list of traverse strings

omfit_classes.sortedDict.dynaLoad(f)[source]

Decorator which calls dynaLoader method

Parameters

f – function to decorate

Returns

decorated function

omfit_classes.sortedDict.dynaLoadKey(f)[source]

Decorator which calls dynaLoad method only if key is not found

Parameters

f – function to decorate

Returns

decorated function

omfit_classes.sortedDict.dynaSave(f)[source]

Decorator which calls dynaSaver method

Parameters

f – function to decorate

Returns

decorated function

omfit_classes.sortedDict.dynaLoader(self, f=None)[source]

Call load function if object has dynaLoad attribute set to True After calling load function the dynaLoad attribute is set to False
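The dynaLoad pattern (defer a file read until first access, then clear the flag) can be sketched as follows; LazyFile and its names are hypothetical:

```python
import functools

def dyna_load(f):
    """Decorator sketch: trigger deferred loading before method access.

    Mirrors the documented pattern: if the object's dynaLoad flag is
    True, call its load() once and clear the flag before proceeding.
    """
    @functools.wraps(f)
    def wrapper(self, *args, **kw):
        if getattr(self, 'dynaLoad', False):
            self.dynaLoad = False  # clear first so load() cannot recurse
            self.load()
        return f(self, *args, **kw)
    return wrapper

class LazyFile(dict):
    """Hypothetical object whose content is loaded on first access."""

    def __init__(self):
        super().__init__()
        self.dynaLoad = True

    def load(self):
        self['data'] = [1, 2, 3]  # stand-in for reading from disk

    @dyna_load
    def get_data(self):
        return self['data']
```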

omfit_classes.sortedDict.dynaSaver(self)[source]

This function is meant to be called in the .save() function of objects of the class OMFITobject that support dynamic loading. The idea is that if an object has not been loaded, then its file representation has not changed and the original file can be reused. This function returns True/False to indicate whether it was successful at saving. If True, then the original .save() function can return; otherwise it should go through saving the data from memory to file.

class omfit_classes.sortedDict.SortedDict(*args, **kwargs)[source]

Bases: dict

A dictionary that keeps its keys in the order in which they’re inserted

Parameters
  • data – A dict object or list of (key,value) tuples from which to initialize the new SortedDict object

  • **kw – Optional keyword arguments given below

kw:
param caseInsensitive

(bool) If True, allows self[‘UPPER’] to yield self[‘upper’].

param sorted

(bool) If True, keep keys sorted alphabetically, instead of by insertion order.

param limit

(int) keep only the latest limit number of entries (useful for data caches)

param dynaLoad

(bool) If True, enable dynamic loading of the dictionary contents (see the dynaLoad decorator above)

index(item)[source]

returns the index of the item

pop(k[, d]) → v[source]

Remove specified key and return the corresponding value. If key is not found, d is returned if given, otherwise KeyError is raised

popitem() → (k, v)[source]

Remove and return some (key, value) pair as a 2-tuple; raise KeyError if D is empty.

items() → a set-like object providing a view on D’s items[source]
iteritems()[source]
keys(filter=None, matching=False)[source]

returns the sorted list of keys in the dictionary

Parameters
  • filter – regular expression for filtering keys

  • matching – boolean to indicate whether to return the keys that match (or not)

Returns

list of keys
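A sketch of regex-filtered key listing; note the real method signature defaults to matching=False, while matching=True is used here for readability:

```python
import re

def filtered_keys(d, filter=None, matching=True):
    """Sketch of keys() with a regular-expression filter.

    Returns the keys that match (or, with matching=False, do not
    match) the given pattern; with no pattern, all keys are returned.
    """
    if filter is None:
        return list(d)
    # Keep keys whose match status equals the requested `matching` flag
    return [k for k in d if bool(re.search(filter, str(k))) == matching]
```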

iterkeys()[source]
values() → an object providing a view on D’s values[source]
itervalues()[source]
update([E, ]**F) → None[source]

Update D from dict/iterable E and F. If E is present and has a .keys() method, then does: for k in E: D[k] = E[k]. If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v. In either case, this is followed by: for k in F: D[k] = F[k].

recursiveUpdate(other, overwrite=False)[source]
setdefault(key, default)[source]

The method setdefault() is similar to get(), but will set dict[key]=default if key is not already in dict

Parameters
  • key – key to be accessed

  • default – default value if key does not exist

Returns

value

get(key, default)[source]

Return the value for key if key is in the dictionary, else default.

value_for_index(index)[source]

Returns the value of the item at the given zero-based index

insert(index, key, value)[source]

Inserts the key, value pair before the item with the given index

copy()[source]

Returns a copy of this object

clear() → None. Remove all items from D.[source]
moveUp(index)[source]

Shift up in key list the item at a given index

Parameters

index – index to be shifted

Returns

None

moveDown(index)[source]

Shift down in key list the item at a given index

Parameters

index – index to be shifted

Returns

None

across(what='', sort=False, returnKeys=False)[source]

Aggregate objects across the tree

Parameters
  • what – string with the regular expression to be cut across

  • sort – sorting of results alphabetically

  • returnKeys – return keys of elements in addition to objects

Returns

list of objects or tuple with objects and keys

>> OMFIT['test'] = OMFITtree()
>> for k in range(5):
>>     OMFIT['test']['aaa'+str(k)] = OMFITtree()
>>     OMFIT['test']['aaa'+str(k)]['var'] = k
>>     OMFIT['test']['bbb'+str(k)] = -1
>> print(OMFIT['test'].across("['aaa*']['var']"))

sort(key=None, **kw)[source]
Parameters

key – function that returns a string that is used for sorting or dictionary key whose content is used for sorting

>> tmp = SortedDict()
>> for k in range(5):
>>     tmp[k] = {}
>>     tmp[k]['a'] = 4 - k
>> # by dictionary key
>> tmp.sort(key='a')
>> # or equivalently
>> tmp.sort(key=lambda x: tmp[x]['a'])

Parameters

**kw – additional keywords passed to the underlying list sort command

Returns

sorted keys

sort_class(class_order=[<class 'dict'>])[source]

sort items based on their class

Parameters

class_order – list containing order of class

Returns

sorted keys

diff(other, ignoreComments=False, ignoreContent=False, skipClasses=(), noloadClasses=(), precision=0.0, order=True, favor_my_order=False, modify_order=False, quiet=True)[source]

Comparison of a SortedDict

Parameters
  • other – other dictionary to compare to

  • ignoreComments – ignore keys that start and end with “__” (e.g. “__comment__”)

  • ignoreContent – ignore content of the objects

  • skipClasses – list of class of objects to ignore

  • noloadClasses – list of class of objects to not load

  • precision – relative precision to which the comparison is carried out

  • order – does the order of the keys matter

  • favor_my_order – favor my order of keys

  • modify_order – update order of input dictionaries based on diff

  • quiet – verbosity of the comparison

Returns

comparison dictionary

pretty_diff(other, ignoreComments=False, ignoreContent=False, skipClasses=(), noloadClasses=(), precision=0.0, order=True, favor_my_order=False, quiet=True)[source]

Comparison of a SortedDict in human readable format

Parameters
  • other – other dictionary to compare to

  • ignoreComments – ignore keys that start and end with “__” (e.g. “__comment__”)

  • ignoreContent – ignore content of the objects

  • skipClasses – list of class of objects to ignore

  • noloadClasses – list of class of objects to not load

  • precision – relative precision to which the comparison is carried out

  • order – does the order of the keys matter

  • favor_my_order – favor my order of keys

  • quiet – verbosity of the comparison

Returns

comparison dictionary in pretty to print format

diffKeys(other)[source]
Parameters

other – other dictionary to compare to

Returns

floating point to indicate the ratio of keys that are similar

changeKeysCase(case=None, recursive=False)[source]

Change all the keys in the dictionary to be upper/lower case

Parameters
  • case – ‘upper’ or ‘lower’

  • recursive – apply this recursively

Returns

None
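A possible in-place implementation of this key-case conversion, shown as an illustrative sketch:

```python
def change_keys_case(d, case='upper', recursive=False):
    """Sketch: change all dictionary keys to upper/lower case in place."""
    convert = str.upper if case == 'upper' else str.lower
    for key in list(d):  # snapshot keys since we mutate while iterating
        value = d.pop(key)
        if recursive and isinstance(value, dict):
            change_keys_case(value, case, recursive)
        d[convert(key)] = value
```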

traverse(string='', level=100, onlyDict=False, onlyLeaf=False, skipDynaLoad=False)[source]

Equivalent to the tree command in UNIX

Parameters
  • string – prepend this string

  • level – depth

  • onlyDict – only subtrees and not the leaves

Returns

list of strings containing the dictionary path to each object

walk(function, **kw)[source]

Walk every member and call a function on the keyword and value

Parameters
  • function – function(self, kid, **kw)

  • **kw – kw passed to the function

Returns

self

safe_del(key)[source]

Delete key entry only if it is present

Parameters

key – key to be deleted

flatten()[source]

The hierarchical structure of the dictionaries is flattened

Returns

SortedDict populated with the flattened content of the dictionary
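Flattening can be sketched as follows; here flattened keys are joined into tree-path strings, though the real method returns a SortedDict and may name keys differently:

```python
def flatten(d, parent_key=''):
    """Flatten a nested dict into a single-level dict.

    Sketch: keys of nested levels are joined into tree-path strings.
    """
    flat = {}
    for key, value in d.items():
        full_key = "%s['%s']" % (parent_key, key)
        if isinstance(value, dict):
            # Merge the flattened sub-dictionary into the result
            flat.update(flatten(value, full_key))
        else:
            flat[full_key] = value
    return flat
```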

setFlat(key, value)[source]

Recursively searches the dictionary for key in order to set a value. Raises KeyError if the key cannot be found, so this method cannot be used to set new entries in the dictionary.

Parameters
  • key – key to be set

  • value – value to set

check_location(location, value=[])[source]

check if location exist and equals value (if value is specified)

Parameters
  • location – location as string

  • value – value for which to return equal

Returns

True/False

>> root['SETTINGS'].check_location("['EXPERIMENT']['shot']", 133221)

todict()[source]

Return a standard Python dictionary representation of the SortedDict

Returns

dictionary

class omfit_classes.sortedDict.OMFITdataset(data_vars=None, coords=None, attrs=None)[source]

Bases: object

Subclassing from this class is like subclassing from the xarray.Dataset class, but without having to deal with the hassle of inheriting from xarray (internally this class uses class composition rather than subclassing).

Also this class makes it possible to use the OMFIT dynamic loading capabilities. All classes that subclass OMFITdataset must define the .dynaLoad attribute.

NOTE: Classes that subclass from OMFITdataset will be identified as an xarray.Dataset when using isinstance(…, xarray.Dataset) within OMFIT

Parameters
  • data_vars – see xarray.Dataset

  • coords – see xarray.Dataset

  • attrs – see xarray.Dataset

to_dataset()[source]

Return an xarray.Dataset representation of the data

Returns

xarray.Dataset

from_dataset(dataset)[source]

Create from xarray.Dataset representation of the data
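The composition-over-inheritance design can be sketched with a plain wrapper that delegates attribute access to the held object; FakeDataset is a hypothetical stand-in for xarray.Dataset:

```python
class Wrapped:
    """Sketch of composition over inheritance, as OMFITdataset does
    with xarray.Dataset (a stand-in Dataset class is used here)."""

    def __init__(self, dataset):
        self._dataset = dataset  # held by composition, not subclassing

    def __getattr__(self, name):
        # Delegate anything not defined on the wrapper to the wrapped object
        return getattr(self._dataset, name)

    def to_dataset(self):
        return self._dataset


class FakeDataset:
    """Hypothetical minimal stand-in for xarray.Dataset."""

    def __init__(self, data):
        self.data = data

    def mean(self):
        return sum(self.data) / len(self.data)
```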

omfit_classes.sortedDict.size_tree_objects(location)[source]

Returns file sizes of objects in the dictionary based on the size of their filename attribute

Parameters

location – string of the tree location to be analyzed

Returns

dictionary with locations sorted by size

omfit_classes.sortedDict.sorted_join_lists(a, b, favor_order_of_a=False, case_insensitive=False)[source]

Join two lists in a way that minimizes the distance between them and the merged list

Parameters
  • a – first list

  • b – second list

  • favor_order_of_a – favor order of list a over order of list b

  • case_insensitive – merge list in a case-insensitive way

Returns

merged list
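A much-simplified sketch of joining two key lists; the real function also minimizes reordering distance and supports case-insensitive merges:

```python
def join_lists(a, b):
    """Merge two lists, keeping all of a's order and appending
    b's items that are not already present (simplified sketch)."""
    merged = list(a)
    seen = set(a)
    merged += [item for item in b if item not in seen]
    return merged
```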

omfit_classes.exceptions_omfit.print_last_exception(file=<_io.TextIOWrapper name='<stderr>' mode='w' encoding='UTF-8'>)[source]

This function prints the last exception that has occurred

omfit_classes.exceptions_omfit.print_stack()[source]
class omfit_classes.exceptions_omfit.doNotReportException[source]

Bases: object

Exceptions that inherit from this class will not trigger an email being sent to the OMFIT developers team

exception omfit_classes.exceptions_omfit.EndOMFITpython(message='', *args, **kw)[source]

Bases: KeyboardInterrupt, omfit_classes.exceptions_omfit.doNotReportException

Class used to stop the running python script without reporting it to the OMFIT developer team

exception omfit_classes.exceptions_omfit.EndAllOMFITpython(message='', *args, **kw)[source]

Bases: KeyboardInterrupt, omfit_classes.exceptions_omfit.doNotReportException

Class used to stop the entire python workflow without reporting it to the OMFIT developer team

exception omfit_classes.exceptions_omfit.OMFITexception(message='', *args, **kw)[source]

Bases: Exception, omfit_classes.exceptions_omfit.doNotReportException

Class used to raise an exception in a user’s script without reporting it to the OMFIT developer team
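The marker mix-in pattern used by doNotReportException can be sketched as follows; the names here are hypothetical:

```python
class DoNotReport:
    """Marker mix-in sketch: exceptions inheriting from it can be
    recognized and excluded from error reporting."""


class UserScriptError(Exception, DoNotReport):
    """Hypothetical user-facing exception that should not be reported."""


def should_report(exc):
    # Report only exceptions that do not carry the marker mix-in
    return not isinstance(exc, DoNotReport)
```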

exception omfit_classes.exceptions_omfit.ReturnCodeException[source]

Bases: RuntimeError

Class used to raise an exception when a code return code is !=0

omfit_classes.exceptions_omfit.signalHandler(signal=None, frame=None)[source]
class omfit_plot.Figure(*args, **kw)[source]

Bases: matplotlib.figure.Figure

figsize : 2-tuple of floats, default: :rc:`figure.figsize`

Figure dimension (width, height) in inches.

dpi : float, default: :rc:`figure.dpi`

Dots per inch.

facecolor : default: :rc:`figure.facecolor`

The figure patch facecolor.

edgecolor : default: :rc:`figure.edgecolor`

The figure patch edge color.

linewidth : float

The linewidth of the frame (i.e. the edge linewidth of the figure patch).

frameon : bool, default: :rc:`figure.frameon`

If False, suppress drawing the figure background patch.

subplotpars : SubplotParams

Subplot parameters. If not given, the default subplot parameters :rc:`figure.subplot.*` are used.

tight_layout : bool or dict, default: :rc:`figure.autolayout`

If False use subplotpars. If True adjust subplot parameters using .tight_layout with default padding. When providing a dict containing the keys pad, w_pad, h_pad, and rect, the default .tight_layout paddings will be overridden.

constrained_layout : bool, default: :rc:`figure.constrained_layout.use`

If True use constrained layout to adjust positioning of plot elements. Like tight_layout, but designed to be more flexible. See /tutorials/intermediate/constrainedlayout_guide for examples. (Note: does not work with add_subplot or ~.pyplot.subplot2grid.)

colorbar(mappable, cax=None, ax=None, use_gridspec=True, **kw)[source]

Customized default grid_spec=True for consistency with tight_layout.

printlines(filename, squeeze=False)[source]

Print all data in the figure’s line plots to a text file. The x values will be taken from the line with the greatest number of points in the (first) axis, and other lines are interpolated if their x values do not match. Column labels are the line labels and xlabel.

Parameters

filename (str) – Path to print data to

Return type

bool

dmp(filename=None)[source]
Parameters

filename – file to which the HDF5 data is dumped

Returns

OMFITdmp object of the current figure

data_managment_plan_file(filename=None)[source]

Output the contents of the figure (self) to a hdf5 file given by filename

Parameters

filename – The path and basename of the file to save to (the extension is stripped, and ‘.h5’ is added). For the purpose of the GA data management plan, these files can be uploaded directly to https://diii-d.gat.com/dmp

Returns

OMFITdmp object of the current figure

savefig(filename, saveDMP=True, PDFembedDMP=True, *args, **kw)[source]

Revised method to save the figure and the data to netcdf at the same time

Parameters
  • filename – filename to save to

  • saveDMP – whether to save also the DMP file. Failing to save as DMP does not raise an exception.

  • PDFembedDMP – whether to embed DMP file in PDF file (only applies if file is saved as PDF)

  • *args – arguments passed to the original matplotlib.figure.Figure.savefig function

  • **kw – kw arguments passed to the original matplotlib.figure.Figure.savefig function

Returns

Returns dmp object if the DMP save succeeded. Returns None if the user decided not to save the DMP.

script()[source]
Returns

string with Python script to reproduce the figure (with DATA!)

OMFITpythonPlot(filename=None)[source]

generate OMFITpythonPlot script from figure (with DATA!)

Parameters

filename – filename for OMFITpythonPlot script

Returns

OMFITpythonPlot object

omfit_plot.close(which_fig)[source]

Wrapper for pyplot.close that closes FigureNotebooks when closing ‘all’

Close a figure window.

fig : None or int or str or .Figure

The figure to close. There are a number of ways to specify this:

  • None: the current figure

  • .Figure: the given .Figure instance

  • int: a figure number

  • str: a figure name

  • ‘all’: all figures

class omfit_plot.savedFigure(fig)[source]

Bases: object

class omfit_plot.DraggableColorbar(cbar, mappable)[source]

Bases: object

connect()[source]

connect to all the events we need

on_press(event)[source]

on button press we will see if the mouse is over us and store some data

key_press(event)[source]
on_motion(event)[source]

on motion we will move the rect if the mouse is over us

on_release(event)[source]

on release we reset the press data

disconnect()[source]

disconnect all the stored connection ids

class omfit_plot.OMFITfigure(obj, figureButtons=True)[source]

Bases: object

addOMFITfigureToolbar(figureButtons=True)[source]
pin(event=None, fig=None, savefig=True, PDFembedDMP=True)[source]
Parameters

savefig – if False, save figure as object, if True, save figure as image.

email(event=None, fig=None, ext='PDF', saveDMP=False, PDFembedDMP=False)[source]
Parameters
  • ext – default ‘PDF’. figure format, e.g. PDF, PNG, JPG, etc.

  • saveDMP – default False, save HDF5 binary file [might have more data than shown in figure; but can be used for DIII-D Data Management Plan (DMP)]

  • PDFembedDMP – default False, embed DMP file in PDF

openPDF(event=None, fig=None, PDFembedDMP=False)[source]
help()[source]
crosshair(force=None)[source]
get(event=None)[source]
getObj(obj)[source]
selectAxes()[source]
selectPick(k)[source]
closePopup(event=None)[source]

this function closes the popup

poPopup(event)[source]

this function creates the popup which can then be populated

mvPopup(event)[source]
button_press_callback(event)[source]

when a mouse button is pressed

button_release_callback(event)[source]

when a mouse button is released

pick(event)[source]

this function detects which object was selected

closePropwindow(event=None)[source]

close the properties editing window

selectProperty(property)[source]

open the properties editing window

getProperty(property)[source]

retrieve the value of the property as seen by the user

setProperty(property, text)[source]
button_manager(event)[source]
objDelete(event=None)[source]
objCopy(event=None)[source]
objLegend(event=None)[source]
objText(event=None)[source]
objPaste(event=None)[source]
objAutoZoom(event=None, ax='')[source]
objSelect(event=None, forceDisable=False)[source]
key_press_callback(event)[source]
key_release_callback(event)[source]
static save_figure(self, saveDMP=True, PDFembedDMP=True, *args)[source]
pan(*args)[source]
zoom(*args)[source]
superzoomed = False
superzoom(event)[source]

Enlarge or restore the selected axis.

property active
class omfit_plot.FigureNotebook(nfig=0, name='', labels=[], geometry='710x710', figsize=(1, 1))[source]

Bases: object

Notebook of simple tabs containing figures.

Parameters
  • nfig – Number of initial tabs

  • name – String to display as the window title

  • labels – Sequence of labels to be used as the tab labels

  • geometry – size of the notebook

  • figsize – tuple, minimum maintained figure size when resizing tabs

on_tab_change(event=None)[source]
email(event=None)[source]
openPDF(event=None)[source]
close()[source]

Close the FigureNotebook master window or a tab

add_figure(label='', num=None, fig=None, **fig_kwargs)[source]

Return the figure canvas for the tab with the given label, creating a new tab if that label does not yet exist. If fig is passed, then that figure is inserted into the tab.

static save_figure(self, _self, *args)[source]
subplots(nrows=1, ncols=1, label='', **subplots_kwargs)[source]

Adds a figure and axes using pyplot.subplots. Refer to pyplot.subplots for documentation

draw(ntab=None)[source]

Draw the canvas in the specified tab. None draws all.

savefig(filename='', **kw)[source]

Call savefig on each figure, with its label appended to filename

Parameters
  • filename – The fullpath+base of the filename to save

  • **kw – Passed to Figure.savefig

omfit_plot.figure(num=None, figsize=None, dpi=None, facecolor=None, edgecolor=None, frameon=True, FigureClass=<class 'omfit_plot.Figure'>, **kw)[source]
omfit_plot.colorbar(mappable=None, cax=None, ax=None, use_gridspec=True, **kw)[source]

Modified pyplot colorbar for default use_gridspec=True.

ORIGINAL DOCUMENTATION

Add a colorbar to a plot.

Function signatures for the pyplot interface; all but the first are also method signatures for the ~.Figure.colorbar method:

colorbar(**kwargs)
colorbar(mappable, **kwargs)
colorbar(mappable, cax=cax, **kwargs)
colorbar(mappable, ax=ax, **kwargs)
mappable

The matplotlib.cm.ScalarMappable (i.e., ~matplotlib.image.AxesImage, ~matplotlib.contour.ContourSet, etc.) described by this colorbar. This argument is mandatory for the .Figure.colorbar method but optional for the .pyplot.colorbar function, which sets the default to the current image.

Note that one can create a .ScalarMappable “on-the-fly” to generate colorbars not attached to a previously drawn artist, e.g.

fig.colorbar(cm.ScalarMappable(norm=norm, cmap=cmap), ax=ax)
cax : ~matplotlib.axes.Axes, optional

Axes into which the colorbar will be drawn.

ax : ~matplotlib.axes.Axes, list of Axes, optional

Parent axes from which space for a new colorbar axes will be stolen. If a list of axes is given they will all be resized to make room for the colorbar axes.

use_gridspec : bool, optional

If cax is None, a new cax is created as an instance of Axes. If ax is an instance of Subplot and use_gridspec is True, cax is created as an instance of Subplot using the gridspec module.

colorbar : ~matplotlib.colorbar.Colorbar

See also its base class, ~matplotlib.colorbar.ColorbarBase.

Additional keyword arguments are of two kinds:

axes properties:

fraction : float, default: 0.15

Fraction of original axes to use for colorbar.

shrink : float, default: 1.0

Fraction by which to multiply the size of the colorbar.

aspect : float, default: 20

Ratio of long to short dimensions.

pad : float, default: 0.05 if vertical, 0.15 if horizontal

Fraction of original axes between colorbar and new image axes.

anchor : (float, float), optional

The anchor point of the colorbar axes. Defaults to (0.0, 0.5) if vertical; (0.5, 1.0) if horizontal.

panchor : (float, float) or False, optional

The anchor point of the colorbar parent axes. If False, the parent axes’ anchor will be unchanged. Defaults to (1.0, 0.5) if vertical; (0.5, 0.0) if horizontal.

colorbar properties:


extend

{'neither', 'both', 'min', 'max'} If not 'neither', make pointed end(s) for out-of-range values. These are set for a given colormap using the colormap set_under and set_over methods.

extendfrac

{None, 'auto', length, lengths} If set to None, both the minimum and maximum triangular colorbar extensions will have a length of 5% of the interior colorbar length (this is the default setting). If set to 'auto', makes the triangular colorbar extensions the same lengths as the interior boxes (when spacing is set to 'uniform') or the same lengths as the respective adjacent interior boxes (when spacing is set to 'proportional'). If a scalar, indicates the length of both the minimum and maximum triangular colorbar extensions as a fraction of the interior colorbar length. A two-element sequence of fractions may also be given, indicating the lengths of the minimum and maximum colorbar extensions respectively as a fraction of the interior colorbar length.

extendrect

bool If False the minimum and maximum colorbar extensions will be triangular (the default). If True the extensions will be rectangular.

spacing

{‘uniform’, ‘proportional’} Uniform spacing gives each discrete color the same space; proportional makes the space proportional to the data interval.

ticks

None or list of ticks or Locator If None, ticks are determined automatically from the input.

format

None or str or Formatter If None, ~.ticker.ScalarFormatter is used. If a format string is given, e.g., ‘%.3f’, that is used. An alternative ~.ticker.Formatter may be given instead.

drawedges

bool Whether to draw lines at color boundaries.

label

str The label on the colorbar’s long axis.

The following will probably be useful only in the context of indexed colors (that is, when the mappable has norm=NoNorm()), or other unusual circumstances.


boundaries

None or a sequence

values

None or a sequence which must be of length 1 less than the sequence of boundaries. For each region delimited by adjacent entries in boundaries, the color mapped to the corresponding value in values will be used.

If mappable is a ~.contour.ContourSet, its extend kwarg is included automatically.

The shrink kwarg provides a simple way to scale the colorbar with respect to the axes. Note that if cax is specified, it determines the size of the colorbar and shrink and aspect kwargs are ignored.

For more precise control, you can manually specify the positions of the axes objects in which the mappable and the colorbar are drawn. In this case, do not use any of the axes properties kwargs.

It is known that some vector graphics viewers (svg and pdf) render white gaps between segments of the colorbar. This is due to bugs in the viewers, not Matplotlib. As a workaround, the colorbar can be rendered with overlapping segments:

cbar = colorbar()
cbar.solids.set_edgecolor("face")
draw()

However this has negative consequences in other circumstances, e.g. with semi-transparent images (alpha < 1) and colorbar extensions; therefore, this workaround is not used by default (see issue #1188).
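As an illustrative sketch (assuming matplotlib is installed; the figure contents here are made up for the example), the gridspec-based default and the gap workaround look like this:

```python
import matplotlib
matplotlib.use('Agg')  # headless backend so this runs without a display
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
im = ax.imshow([[0, 1], [2, 3]])  # any ScalarMappable works here
# use_gridspec=True is the default that omfit_plot.colorbar enforces
cbar = fig.colorbar(im, ax=ax, use_gridspec=True, shrink=0.9)
# documented workaround for white gaps between segments in svg/pdf viewers
cbar.solids.set_edgecolor("face")
fig.canvas.draw()
```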

omfit_plot.subplot(*args, **kw)[source]

Add a subplot to the current figure.

Wrapper of .Figure.add_subplot with a difference in behavior explained in the notes section.

Call signatures:

subplot(nrows, ncols, index, **kwargs)
subplot(pos, **kwargs)
subplot(**kwargs)
subplot(ax)
args : int, (int, int, *index), or .SubplotSpec, default: (1, 1, 1)

The position of the subplot described by one of

  • Three integers (nrows, ncols, index). The subplot will take the index position on a grid with nrows rows and ncols columns. index starts at 1 in the upper left corner and increases to the right. index can also be a two-tuple specifying the (first, last) indices (1-based, and including last) of the subplot, e.g., fig.add_subplot(3, 1, (1, 2)) makes a subplot that spans the upper 2/3 of the figure.

  • A 3-digit integer. The digits are interpreted as if given separately as three single-digit integers, i.e. fig.add_subplot(235) is the same as fig.add_subplot(2, 3, 5). Note that this can only be used if there are no more than 9 subplots.

  • A .SubplotSpec.

projection : {None, 'aitoff', 'hammer', 'lambert', 'mollweide', 'polar', 'rectilinear', str}, optional

The projection type of the subplot (~.axes.Axes). str is the name of a custom projection, see ~matplotlib.projections. The default None results in a ‘rectilinear’ projection.

polar : bool, default: False

If True, equivalent to projection=’polar’.

sharex, sharey : ~.axes.Axes, optional

Share the x or y ~matplotlib.axis with sharex and/or sharey. The axis will have the same limits, ticks, and scale as the axis of the shared axes.

label : str

A label for the returned axes.

.axes.SubplotBase, or another subclass of ~.axes.Axes

The axes of the subplot. The returned axes base class depends on the projection used. It is ~.axes.Axes if rectilinear projection is used and .projections.polar.PolarAxes if polar projection is used. The returned axes is then a subplot subclass of the base class.

**kwargs

This method also takes the keyword arguments for the returned axes base class; except for the figure argument. The keyword arguments for the rectilinear base class ~.axes.Axes can be found in the following table but there might also be other keyword arguments if another projection is used.

Properties: adjustable: {‘box’, ‘datalim’} agg_filter: a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array alpha: float or None anchor: 2-tuple of floats or {‘C’, ‘SW’, ‘S’, ‘SE’, …} animated: bool aspect: {‘auto’} or num autoscale_on: bool autoscalex_on: bool autoscaley_on: bool axes_locator: Callable[[Axes, Renderer], Bbox] axisbelow: bool or ‘line’ box_aspect: None, or a number clip_box: .Bbox clip_on: bool clip_path: Patch or (Path, Transform) or None contains: unknown facecolor or fc: color figure: .Figure frame_on: bool gid: str in_layout: bool label: object navigate: bool navigate_mode: unknown path_effects: .AbstractPathEffect picker: None or bool or callable position: [left, bottom, width, height] or ~matplotlib.transforms.Bbox prop_cycle: unknown rasterization_zorder: float or None rasterized: bool or None sketch_params: (scale: float, length: float, randomness: float) snap: bool or None title: str transform: .Transform url: str visible: bool xbound: unknown xlabel: str xlim: (bottom: float, top: float) xmargin: float greater than -0.5 xscale: {“linear”, “log”, “symlog”, “logit”, …} xticklabels: unknown xticks: unknown ybound: unknown ylabel: str ylim: (bottom: float, top: float) ymargin: float greater than -0.5 yscale: {“linear”, “log”, “symlog”, “logit”, …} yticklabels: unknown yticks: unknown zorder: float

Creating a subplot will delete any pre-existing subplot that overlaps with it beyond sharing a boundary:

import matplotlib.pyplot as plt
# plot a line, implicitly creating a subplot(111)
plt.plot([1, 2, 3])
# now create a subplot which represents the top plot of a grid
# with 2 rows and 1 column. Since this subplot will overlap the
# first, the plot (and its axes) previously created, will be removed
plt.subplot(211)

If you do not want this behavior, use the .Figure.add_subplot method or the .pyplot.axes function instead.

If the figure already has a subplot with key (args, kwargs) then it will simply make that subplot current and return it. This behavior is deprecated. Meanwhile, if you do not want this behavior (i.e., you want to force the creation of a new subplot), you must use a unique set of args and kwargs. The axes label attribute has been exposed for this purpose: if you want two subplots that are otherwise identical to be added to the figure, make sure you give them unique labels.

In rare circumstances, .add_subplot may be called with a single argument, a subplot axes instance already created in the present figure but not in the figure’s list of axes.

.Figure.add_subplot .pyplot.subplots .pyplot.axes .Figure.subplots

plt.subplot(221)

# equivalent but more general
ax1=plt.subplot(2, 2, 1)

# add a subplot with no frame
ax2=plt.subplot(222, frameon=False)

# add a polar subplot
plt.subplot(223, projection='polar')

# add a red subplot that shares the x-axis with ax1
plt.subplot(224, sharex=ax1, facecolor='red')

# delete ax2 from the figure
plt.delaxes(ax2)

# add ax2 to the figure again
plt.subplot(ax2)
class omfit_plot.quickplot(x, y, ax=None)[source]

Bases: object

quickplot plots lots of data quickly

Parameters
  • x – x values

  • y – y values

  • ax – ax to plot on

It assumes the data points are dense in the x dimension compared to the screen resolution at all points in the plot. It will resize when the axes are clicked on.

get_ax_size()[source]
plot()[source]
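A minimal, pure-Python sketch of the kind of min/max decimation such dense-data plotting relies on (a hypothetical helper for illustration, not the actual quickplot implementation): each horizontal pixel bin keeps only its minimum and maximum sample, so spikes stay visible while the point count drops to roughly twice the pixel width.

```python
def decimate_minmax(x, y, n_pixels):
    """Reduce dense (x, y) data to at most 2 points per horizontal pixel,
    keeping each bin's min and max so spikes remain visible."""
    if len(x) <= 2 * n_pixels:
        return list(x), list(y)
    xs, ys = [], []
    bin_size = len(x) / n_pixels
    for i in range(n_pixels):
        lo, hi = int(i * bin_size), int((i + 1) * bin_size)
        seg = y[lo:hi]
        if not seg:
            continue
        j_min = lo + seg.index(min(seg))
        j_max = lo + seg.index(max(seg))
        for j in sorted((j_min, j_max)):  # preserve x ordering within the bin
            xs.append(x[j])
            ys.append(y[j])
    return xs, ys
```

Re-running the decimation on every axes-limit change (as quickplot does on click) keeps the drawn point count bounded by the screen resolution.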

API

omfit_classes.OMFITx.repr(value)[source]

repr modified to work for GUI functions

omfit_classes.OMFITx.repr_eval(location, preentry=None, collect=False)[source]

evaluate location and provide representation

class omfit_classes.OMFITx.GUI(pythonFile, relativeLocations, **kw)[source]

Bases: object

This class creates a new GUI. It is used internally by OMFIT when an OMFITpythonGUI object is executed

Parameters

pythonFile – OMFITpythonGUI object to be executed

Returns

None

update()[source]
omfit_classes.OMFITx.inputbox(prompt='Input box')[source]

Open a Dialog box to prompt for user input. Note: this is a blocking call.

Return input string if user chooses submit and None otherwise

omfit_classes.OMFITx.UpdateGUI(top=None)[source]

Function used to update user GUIs

Parameters

top – TopLevel tk GUI to be updated

omfit_classes.OMFITx.Refresh()[source]

Force a refresh of the OMFIT GUI by issuing a TkInter .update()

omfit_classes.OMFITx.CloseAllGUIs()[source]

Function for closing all user GUIs

omfit_classes.OMFITx.CompoundGUI(pythonFile, title=None, **kw)[source]

This method allows the creation of nested GUIs.

Parameters
  • pythonFile – is meant to be an OMFITpythonGUI object in the OMFIT tree

  • title – title to appear in the compound GUI frame. If None, the location of the pythonFile object in the OMFIT tree will be shown. If an empty string, the compound GUI title is suppressed.

Returns

None

omfit_classes.OMFITx.Tab(name='')[source]

This method creates a Tab under which the successive GUI elements will be placed

Parameters

name – Name to assign to the TAB

Returns

None

omfit_classes.OMFITx.Entry(location, lbl=None, comment=None, updateGUI=False, help='', preentry=None, postcommand=None, check=[], multiline=False, norm=None, default=[], delete_if_default=False, url='', kwlabel={}, **kw)[source]

This method creates a GUI element of the entry type. The background of the GUI gets colored green/red depending on whether the input by the user is a valid Python entry

Parameters
  • location – location in the OMFIT tree (notice that this is a string)

  • lbl – Label which is put on the left of the entry

  • comment – A comment which appears on top of the entry

  • updateGUI – Force a re-evaluation of the GUI script when this parameter is changed

  • help – help provided when user right-clicks on GUI element (adds GUI button)

  • preentry – function to pre-process the data at the OMFIT location to be displayed in the entry GUI element

  • postcommand – command to be executed after the value in the tree is updated. This command will receive the OMFIT location string as an input

  • check – function that returns whether what the user has entered in the entry GUI element is a valid entry. This will make the background colored yellow, and users will not be able to set the value.

  • default – Set the default value if the tree location does not exist (adds GUI button)

  • delete_if_default – Delete tree entry if the value is the default value

  • multiline – Force display of button for multiple-line text entry

  • norm – normalizes numeric variables (overrides preentry or postcommand)

  • url – open url in web-browser (adds GUI button)

  • kwlabel – keywords passed to ttk.Label

Returns

associated ttk.Entry object
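The green/red validity coloring amounts to checking whether the typed string parses as a Python expression. A minimal stdlib sketch of such a check (illustrative only, not the OMFIT implementation):

```python
import ast

def is_valid_python_entry(text):
    """Return True if `text` parses as a Python expression
    (the condition a green Entry background indicates)."""
    try:
        ast.parse(text, mode='eval')
        return True
    except SyntaxError:
        return False
```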

omfit_classes.OMFITx.ComboBox(location, options, lbl=None, comment=None, updateGUI=False, state='readonly', help='', postcommand=None, check=[], default=[], url='', kwlabel={}, **kw)[source]

This method creates a GUI element of the combobox type. The background of the GUI gets colored green/red depending on whether the input by the user is a valid Python entry

Notice that this method can be used to set multiple entries at once: ComboBox(["root['asd']", "root['dsa']", "root['aaa']"], {'': [0, 0, 0], 'a': [1, 1, 0], 'b': [1, 0, '***']}, 'Test multi', default=[0, 0, 0]). This comes in very handy when complex/exclusive switch combinations need to be set in a namelist file, for example. Use the string '***' to leave a parameter unchanged.

Parameters
  • location – location in the OMFIT tree (notice that this is either a string or a list of strings)

  • options – possible options the user can choose from. This can be a list or a dictionary.

  • lbl – Label which is put on the left of the entry

  • comment – A comment which appears on top of the entry

  • updateGUI – Force a re-evaluation of the GUI script when this parameter is changed

  • state

    • 'readonly' (default): the user cannot type in arbitrary values

    • 'normal': allow the user to type in any value

    • 'search': allow searching among the entries

  • help – help provided when user right-clicks on GUI element (adds GUI button)

  • postcommand – command to be executed after the value in the tree is updated. This command will receive the OMFIT location string as an input

  • check – function that returns whether what the user has entered in the entry GUI element is a valid entry. This will make the background colored yellow, and users will not be able to set the value.

  • default – Set the default value if the tree location does not exist (adds GUI button)

  • url – open url in web-browser (adds GUI button)

  • kwlabel – keywords passed to ttk.Label

Returns

associated TkInter combobox object
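The multi-entry behavior described above can be illustrated with a plain-Python sketch of how a chosen option maps onto several locations, with '***' leaving a value untouched (hypothetical helper, not the OMFIT implementation):

```python
def apply_combo_choice(current, chosen):
    """Map a ComboBox option (one value per location) onto the current
    values; the string '***' leaves the corresponding entry unchanged."""
    return [old if new == '***' else new for old, new in zip(current, chosen)]

# options dict as in the ComboBox docstring example
options = {'': [0, 0, 0], 'a': [1, 1, 0], 'b': [1, 0, '***']}
current = [5, 6, 7]
```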

class omfit_classes.OMFITx.same_row[source]

Bases: object

Environment to place GUI elements on the same row

For example to place two buttons side by side:

>> with OMFITx.same_row():
>>     OMFITx.Button('Run', lambda: None)
>>     OMFITx.Button('Plot', lambda: None)

class omfit_classes.OMFITx.same_tab(tab_name)[source]

Bases: object

Environment to place GUI elements within the same tab

For example to place two buttons in the same tab named ‘test’

>> with OMFITx.same_tab('test'):
>>     OMFITx.Button('Run', lambda: None)
>>     OMFITx.Button('Plot', lambda: None)

omfit_classes.OMFITx.Button(lbl, command, help='', url='', closeGUI=False, updateGUI=False, **kw)[source]

This method creates a GUI element of the button type

Parameters
  • lbl – the text to be written on the button

  • command – the command to be executed

  • help – help provided when user right-clicks on GUI element (adds GUI button)

  • url – open url in web-browser (adds GUI button)

  • updateGUI – Force a re-evaluation of the GUI script when this parameter is changed

  • closeGUI – Close current GUI after executing the command

  • **kw – extra keywords are passed to the ttk.Button

Returns

associated ttk.Button object

omfit_classes.OMFITx.Label(lbl, align='center', **kw)[source]

This method creates a GUI element of the label type

Parameters
  • lbl – the text to be written in the label

  • align – alignment of the text in the frame

Returns

associated ttk.Label object

omfit_classes.OMFITx.Separator(lbl=None, kwlabel={}, **kw)[source]

This method creates a TkInter separator object

Parameters
  • lbl – text to be written between separator lines

  • kwlabel – keywords passed to ttk.Label

  • **kw – keywords passed to ttk.Label

Returns

associated ttk.Label object

omfit_classes.OMFITx.FilePicker(location, lbl=None, comment=None, updateGUI=False, help='', postcommand=None, localRemote=True, transferRemoteFile=True, directory=False, action='open', tree=True, default=[], url='', kwlabel={}, init_directory_location=None, init_pattern_location=None, favorite_list_location=None, pattern_list_location=None, reveal_location=None, **kw)[source]

This method creates a GUI element of the filePicker type, which allows picking a file or directory

Parameters
  • location – location in the OMFIT tree (notice that this is a string)

  • lbl – label to be shown near the button

  • help – help provided when user right-clicks on GUI element (adds GUI button)

  • postcommand – command to be executed after the value in the tree is updated. This command will receive the OMFIT location string as an input

  • updateGUI – Force a re-evaluation of the GUI script when this parameter is changed

  • localRemote – True: both, ‘local’: only local, ‘remote’: only remote

  • transferRemoteFile

    controls what goes into location

    • string with local filename (if transferRemoteFile==True)

    • string with the filename (if transferRemoteFile==False)

    • tuple with the filename,server,tunnel (if transferRemoteFile==None)

    if transferRemoteFile=True, then the file is transferred to a temporary folder

    if transferRemoteFile is a string, then it will be interpreted as the directory where to move the file

  • directory – whether it’s a directory or a file

  • action – ‘open’ or ‘save’

  • tree – load from OMFIT tree location

  • url – open url in web-browser (adds GUI button)

  • kwlabel – keywords passed to ttk.Label

  • init_directory_location – The contents of this location are used to set the initial directory for file searches. If a file name is specified the directory will be determined from the file name and this input ignored. Otherwise, if set this will be used to set the initial directory.

  • init_pattern_location – The default pattern is '*'. If this is specified then the contents of the tree location will replace the default initial pattern.

  • favorite_list_location – OMFIT tree location which contains a possibly empty list of favorite file directories. In keeping with the general OMFIT approach, this should be a string.

  • pattern_list_location – OMFIT tree location which contains a possibly empty list of favorite search patterns. In keeping with the general OMFIT approach, this should be a string.

  • reveal_location – location used for creation of the help (this is used internally by OMFIT, should not be used by users)

  • **kw – keywords passed to Entry object

Returns

associated ttk.Entry object
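The three transferRemoteFile return shapes can be sketched with a plain-Python helper (hypothetical, for illustration of the documented semantics only):

```python
def picker_result(filename, server, tunnel, transferRemoteFile=True,
                  tmp_dir='/tmp/omfit_downloads'):
    """Illustrate what ends up at `location` for each transferRemoteFile mode.
    `tmp_dir` is a made-up stand-in for OMFIT's temporary folder."""
    if transferRemoteFile is None:
        return (filename, server, tunnel)          # full remote reference
    if transferRemoteFile is True:
        # file is transferred to a temporary folder; location gets the local path
        return tmp_dir + '/' + filename.rsplit('/', 1)[-1]
    if isinstance(transferRemoteFile, str):
        # a string is interpreted as the directory where the file is moved
        return transferRemoteFile + '/' + filename.rsplit('/', 1)[-1]
    return filename                                # False: the filename as-is
```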

omfit_classes.OMFITx.ObjectPicker(location, lbl=None, objectType=None, objectKW={}, postcommand=None, unset_postcommand=None, kwlabel={}, init_directory_location=None, init_pattern_location=None, favorite_list_location=None, pattern_list_location=None, **kw)[source]

This helper method creates a GUI element of the objectPicker type, which allows loading objects into the tree.

If an object is already present at the location, then a button allows picking of a different object.

Notice that this GUI element will always call an updateGUI

Parameters
  • location – location in the OMFIT tree (notice that this is a string)

  • lbl – label to be shown near the button/object picker

  • objectType – class of the object that one wants to load (e.g. OMFITnamelist, OMFITgeqdsk, …) if objectType is None then the object selected with Tree is deepcopied

  • objectKW – keywords passed to the object

  • postcommand – command to be executed after the value in the tree is updated. This command will receive the OMFIT location string as an input.

  • unset_postcommand – command to be executed after the value in the tree is deleted. This command will receive the OMFIT location string as an input.

  • kwlabel – keywords passed to ttk.Label

  • init_directory_location – The contents of this location are used to set the initial directory for file searches. If a file name is specified the directory will be determined from the file name and this input ignored. Otherwise, if set this will be used to set the initial directory.

  • init_pattern_location – The default pattern is '*'. If this is specified then the contents of the tree location will replace the default initial pattern.

  • favorite_list_location – OMFIT tree location which contains a possibly empty list of favorite file directories. In keeping with the general OMFIT approach, this should be a string.

  • pattern_list_location – OMFIT tree location which contains a possibly empty list of favorite search patterns. In keeping with the general OMFIT approach, this should be a string.

  • **kw – extra keywords are passed to the FilePicker object

Returns

associated ttk.Entry object

omfit_classes.OMFITx.ModulePicker(location, modules=None, lbl=None, *args, **kw)[source]

This method creates a GUI element of the combobox type for the selection of modules within the OMFIT project.

Parameters
  • location – location in the OMFIT tree (notice that this is either a string or a list of strings)

  • modules – string or list of strings with IDs of the allowed modules. If modules is None all modules in OMFIT are listed

  • lbl – label to be shown near the combobox

  • load – list of two-element lists, each with a module name and the location where the module can be loaded, e.g. [['OMFITprofiles', "root['OMFITprofiles']"], ['EFIT', "OMFITmodules[-2]['EFIT']"]]. Setting load=True will load the modules as submodules

  • *args – arguments passed to OMFITx.ComboBox

  • **kw – keywords passed to OMFITx.ComboBox

Returns

returns from OMFITx.ComboBox

omfit_classes.OMFITx.TreeLocationPicker(location, lbl=None, comment=None, kwlabel={}, default=[], help='', url='', updateGUI=False, postcommand=None, check=None, base=None, **kw)[source]

This method creates a GUI element used to select a tree location. The label of the GUI turns green/red depending on whether the input by the user is a valid OMFIT tree entry (non-existing tree entries are allowed) and, if a check function is given, on whether the input satisfies the check (non-valid tree entries are NOT allowed).

Parameters
  • location – location in the OMFIT tree (notice that this is a string)

  • lbl – Label which is put on the left of the entry

  • comment – A comment which appears on top of the entry

  • kwlabel – keywords passed to ttk.Label

  • default – Set the default value if the tree location does not exist (adds GUI button)

  • help – help provided when user right-clicks on GUI element (adds GUI button)

  • url – open url in web-browser (adds GUI button)

  • updateGUI – Force a re-evaluation of the GUI script when this parameter is changed

  • postcommand – command to be executed after the value in the tree is updated. This command will receive the OMFIT location string as an input

  • check – function that returns whether what the user has entered in the entry GUI element is a valid entry. This will make the label colored yellow, and users will not be able to set the value.

  • base – object in location with respect to which relative locations are evaluated

  • **kw – keywords passed to OneLineText object

Returns

associated ttk.Entry object

omfit_classes.OMFITx.CheckBox(location, lbl=None, comment=None, useInt=False, mapFalseTrue=[], updateGUI=False, help='', postcommand=None, default=[], url='', kwlabel={}, **kw)[source]

This method creates a GUI element of the checkbutton type

This method accepts a list of locations, labels and defaults

Parameters
  • location – location in the OMFIT tree (notice that this is a string)

  • lbl – Label which is put on the left of the entry

  • comment – A comment which appears on top of the entry

  • useInt – Use integers (1 or 0) instead of boolean (True or False)

  • mapFalseTrue – a 2-element list: the first element is used for the unchecked state, the second for the checked state

  • updateGUI – Force a re-evaluation of the GUI script when this parameter is changed

  • help – help provided when user right-clicks on GUI element (adds GUI button)

  • postcommand – command to be executed after the value in the tree is updated. This command will receive the OMFIT location string as an input

  • default – Set the default value if the tree location does not exist (adds GUI button)

  • url – open url in web-browser (adds GUI button)

  • kwlabel – keywords passed to ttk.Label

  • **kw – extra keywords are passed to the Checkbutton object

Returns

associated TkInter checkbutton object

>>> OMFITx.CheckBox(["OMFIT['ck']","OMFIT['ck1']"],['hello','asd'],default=[False,True])
>>> OMFITx.CheckBox("OMFIT['ck']",'hello',default=False)
omfit_classes.OMFITx.ListEditor(location, options, lbl=None, default=None, unique=True, ordered=True, updateGUI=False, postcommand=None, only_valid_options=False, help='', url='', show_delete_button=False, max=None)[source]

GUI element to add or remove objects to a list. Note: multiple-item selection is possible with the Shift and Ctrl keys.

Parameters
  • location – location in the OMFIT tree (notice that this is a string).

  • options – possible options the user can choose from. This can be a tree location, a list, or a dictionary. If a dictionary, then keys are shown in the GUI and values are set in the list. In order to use "show_delete_button", this must be a string giving the location of a list in the tree.

  • lbl – Label which is put on the left of the entry

  • default – Set the default value if the tree location does not exist

  • unique – Do not allow repetitions in the list

  • ordered – Keep the same order as in the list of options. If False, then buttons to move elements up/down are shown

  • updateGUI – Force a re-evaluation of the GUI script when this parameter is changed

  • postcommand – function to be called after a button is pushed. It is called as postcommand(location=location,button=button) where button is in [‘add’,’add_all’,’remove’,’remove_all’]

  • only_valid_options – list can only contain valid options

  • help – help provided when user right-clicks on GUI element (adds GUI button)

  • url – open url in web-browser (adds GUI button)

  • show_delete_button – bool: Show an additional button for deleting items from the left hand list

  • max – allow at most MAX choices

omfit_classes.OMFITx.Slider(location, start_stop_step, lbl=None, comment=None, digits=None, updateGUI=False, help='', preentry=None, postcommand=None, norm=None, default=[], url='', kwlabel={}, refresh_every=100, **kw)[source]

This method creates a GUI element of the slider type

Parameters
  • location – location in the OMFIT tree (notice that this is a string)

  • start_stop_step – list of tree elements with start/stop/step of the slider

  • lbl – Label which is put on the left of the entry

  • comment – A comment which appears on top of the entry

  • digits – How many digits to use (if None uses 3 digits if start_stop_step has floats or else 0 digits if these are integers)

  • updateGUI – Force a re-evaluation of the GUI script when this parameter is changed

  • help – help provided when user right-clicks on GUI element (adds GUI button)

  • preentry – function to pre-process the data at the OMFIT location to be displayed in the entry GUI element

  • postcommand – command to be executed after the value in the tree is updated. This command will receive the OMFIT location string as an input

  • default – Set the default value if the tree location does not exist (adds GUI button)

  • norm – normalizes numeric variables (overrides preentry or postcommand)

  • url – open url in web-browser (adds GUI button)

  • refresh_every – how often to call postcommand function (in ms)

  • kwlabel – keywords passed to ttk.Label

Returns

associated TtkScale object

omfit_classes.OMFITx.Lock(location, value=[], checkLock=False, clear=False)[source]

The lock method prevents users from using a GUI element which would affect a specific location in the OMFIT tree

Parameters
  • location – location in the OMFIT tree (notice that this is a string or a list of strings) If location is None, then all locks are cleared.

  • checkLock – False=set the lock | True=return the lock value

  • value – lock location at this value

  • clear – clear or set the lock

Returns

None if checkLock=False, otherwise True/False depending on value of the lock
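The documented set/check/clear behavior can be sketched with a simple dict-based registry (an illustrative stdlib analogue, not the OMFIT implementation):

```python
_locks = {}  # hypothetical registry: location string -> locked value

def lock(location, value=None, checkLock=False, clear=False):
    """Mimic the documented Lock semantics: set a lock, query it, or clear it."""
    if location is None:        # location=None clears all locks
        _locks.clear()
        return None
    if checkLock:               # True/False depending on the lock
        return location in _locks
    if clear:
        _locks.pop(location, None)
    else:
        _locks[location] = value
    return None
```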

omfit_classes.OMFITx.TitleGUI(title=None)[source]

Sets the title of the user GUI window (if it is not a compound GUI)

Parameters

title – string containing the title

Returns

None

omfit_classes.OMFITx.ShotTimeDevice(postcommand=None, showDevice=True, showShot=True, showTime=True, showRunID=False, multiShots=False, multiTimes=False, showSingleTime=False, checkDevice=None, checkShot=None, checkTime=None, checkRunID=None, subMillisecondTime=False, stopIfNotSet=True, updateGUI=True)[source]

This high level GUI allows setting of DEVICE/SHOT/TIME of each module (sets up OMFIT MainSettings if root[‘SETTINGS’][‘EXPERIMENT’][‘XXX’] is an expression)

Parameters
  • postcommand – command to be executed every time device,shot,time are changed (location is passed to postcommand)

  • showDevice – True/False show device section or list of suggested devices

  • showShot – True/False show shot section, or a list of suggested shots

  • showTime – True/False show time section, or a list of suggested times

  • showRunID – True/False show runID Entry

  • multiShots – True/False show single/multi shots

  • multiTimes – True/False show single/multi times

  • showSingleTime – True/False if multiTimes, still show single time

  • checkDevice – check if device user input satisfies condition

  • checkShot – check if shot user input satisfies condition

  • checkTime – check if time user input satisfies condition

  • checkRunID – check if runID user input satisfies condition

  • subMillisecondTime – Allow floats as times

  • stopIfNotSet – Stop GUI visualization if shot/time/device are not set

Returns

None

omfit_classes.OMFITx.CloseGUI()[source]

Function for closing the active user GUI

omfit_classes.OMFITx.End(what='single')[source]

End execution of OMFITpython script

Parameters

what

  • ‘single’ terminates the running script

  • ’all’ terminates the whole workflow

omfit_classes.OMFITx.Open(object)[source]

Open an OMFITascii object in an editor, or an OMFITweb in a browser. File extension behaviour can be specified in OMFIT[‘MainSettings’][‘SETUP’][‘EXTENSIONS’]

Parameters

object – OMFIT object or filename to be opened in external editor

omfit_classes.OMFITx.Figure(toolbar=True, returnFigure=False, fillX=False, **kw)[source]

Embed a matplotlib figure in an OMFIT GUI

Parameters
  • toolbar – [True] show/hide the figure toolbar

  • returnFigure – [False] if True the function returns the figure f (an axis can then be obtained via f.use_subplot(111))

  • fillX – [False] fill X dimension of screen

  • figsize – (5*2./3., 4*2./3.) figure size

  • **kw – keyword arguments passed to pyplot.Figure

omfit_classes.OMFITx.Dialog(*args, **kw)[source]

Display a dialog box and wait for user input

Parameters
  • message – the text to be written in the label

  • answers – list of possible answers

  • icon – “question”, “info”, “warning”, “error”

  • title – title of the frame

  • options – dictionary of True/False options that are displayed as checkbuttons in the dialog

  • entries – dictionary of string options that are displayed as entries in the dialog

Returns

return the answer chosen by the user (a dictionary if options keyword was passed)

omfit_classes.OMFITx.clc(tag=None)[source]
Clear the console. Possible tags are:

  • INFO : forest green

  • HIST : dark slate gray

  • WARNING : DarkOrange2

  • HELP : PaleGreen4

  • STDERR : red3

  • STDOUT : black

  • DEBUG : gold4

  • PROGRAM_OUT : blue

  • PROGRAM_ERR : purple

Parameters

tag – specific tag to clear

omfit_classes.OMFITx.EditASCIIobject(location, lbl=None, comment=None, updateGUI=False, help='', postcommand=None, url='', **kw)[source]

This method creates a GUI element that edits ASCII files in the OMFIT tree. Sample usage:

OMFITx.EditASCIIobject("root['INPUTS']['TRANSP']", 'edit namelist', postcommand=lambda location:eval(location).load())
Parameters
  • location – location of the ASCII OMFITobject in the OMFIT tree (notice that this is a string)

  • lbl – Label which is put on the left of the entry

  • comment – A comment which appears on top of the entry

  • updateGUI – Force a re-evaluation of the GUI script when this parameter is changed

  • help – help provided when user right-clicks on GUI element (adds GUI button)

  • postcommand – command to be executed after the value in the tree is updated. This command will receive the OMFIT location string as an input

  • url – open url in web-browser (adds GUI button)

Returns

associated ttk.Button object

class omfit_classes.OMFITx.FileDialog(directory=None, serverPicker='', server='localhost', tunnel='', pattern='*', default='', master=None, lockServer=False, focus='filterDirs', favorite_list_location=None, pattern_list_location=None, is_dir=False, title='File Browser')[source]

Bases: object

Standard remote file selection dialog – no checks on selected file.

Parameters
  • directory – directory where to start browsing

  • serverPicker – serverPicker wins over server/tunnel settings; serverPicker=None will reuse the latest server/tunnel that the user browsed to

  • server – server

  • tunnel – tunnel

  • pattern – glob pattern for file selection

  • default – default filename selection

  • master – Tkinter master GUI

  • lockServer – allow users to change server settings

  • focus – what to focus in the GUI (‘filterDirs’,’filterFiles’)

  • favorite_list_location – OMFIT tree location which contains a possibly empty list of favorite file directories. To keep with the general omfit approach this should be a string.

  • pattern_list_location – OMFIT tree location which contains a possibly empty list of favorite search patterns. To keep with the general omfit approach this should be a string.

  • is_dir – (bool) Whether the requested file is a directory

go(directory=None)[source]
quit(how=None)[source]
dirs_double_event()[source]
dirs_back_event()[source]
files_select_event()[source]
ok_command()[source]
remote_command(command)[source]
filter_command(dir=None)[source]
get_filter()[source]
get_selection()[source]
set_filter(dir, pat)[source]
set_selection(file)[source]
manage_list(fav_list, obj, op)[source]
class omfit_classes.OMFITx.LoadFileDialog(*args, **kw)[source]

Bases: omfit_classes.OMFITx.FileDialog

File selection dialog which checks that the file exists.

ok_command()[source]
class omfit_classes.OMFITx.SaveFileDialog(directory=None, serverPicker='', server='localhost', tunnel='', pattern='*', default='', master=None, lockServer=False, focus='filterDirs', favorite_list_location=None, pattern_list_location=None, is_dir=False, title='File Browser')[source]

Bases: omfit_classes.OMFITx.FileDialog

File selection dialog which checks that the file exists before saving.

ok_command()[source]
omfit_classes.OMFITx.remoteFile(parent=None, transferRemoteFile=True, remoteFilename=None, server=None, tunnel=None, init_directory_location=None, init_pattern_location=None, favorite_list_location=None, pattern_list_location=None, is_dir=False)[source]

Opens a dialogue asking for filename and server/tunnel for remote file transfer. This function is mostly used within the framework; for use in OMFIT GUI scripts please consider using the OMFITx.FilePicker and OMFITx.ObjectPicker functions instead.

Parameters
  • parent – Tkinter parent GUI

  • transferRemoteFile – [True,False,None] if True the remote file is transferred to the OMFITcwd directory

  • remoteFilename – initial string for remote filename

  • server – initial string for server

  • tunnel – initial string for tunnel

  • init_directory_location – The contents of this location are used to set the initial directory for file searches. If a file name is specified the directory will be determined from the file name and this input ignored. Otherwise, if set this will be used to set the initial directory.

  • init_pattern_location – The default pattern is ‘*’. If this is specified then the contents of the tree location will replace the default initial pattern.

  • favorite_list_location – OMFIT tree location which contains a possibly empty list of favorite file directories. To keep with the general omfit approach this should be a string.

  • pattern_list_location – OMFIT tree location which contains a possibly empty list of favorite search patterns. To keep with the general omfit approach this should be a string.

Returns

The return value depends on the transferRemoteFile parameter:

  • string with local filename (if transferRemoteFile==True)

  • string with the filename (if transferRemoteFile==False)

  • tuple with the filename,server,tunnel (if transferRemoteFile==None)

omfit_classes.OMFITx.remote_sysinfo(server, tunnel='', quiet=False)[source]

This function retrieves information from a remote server (like the shell which is running there):

{'ARG_MAX': 4611686018427387903,
 'QSTAT': '',
 'SQUEUE': '/opt/slurm/default/bin/squeue',
 'environment': OMFITenv([]),
 'id': 6216321643098941518,
 'login': ['.cshrc', '.login'],
 'logout': ['.logout'],
 'shell': 'csh',
 'shell_path': '/bin/csh',
 'sysinfo': 'csh\nARG_MAX=4611686018427387903\nQSTAT=\nSQUEUE=/opt/slurm/default/bin/squeue\necho: No match.'
}

Information from the remote server is stored in a dictionary

Parameters
  • server – remote server

  • tunnel – via tunnel

  • quiet – suppress output or not

Returns

dictionary with info from the server
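The ‘sysinfo’ entry is a raw string of KEY=value lines, as shown in the example above. A small sketch of how such a string could be parsed back into a dictionary (illustrative only; this is not the framework’s own parser):

```python
def parse_sysinfo(sysinfo):
    """Parse KEY=value lines from a raw sysinfo string into a dict.

    Lines without '=' (e.g. the shell name or stray shell messages)
    are skipped. Illustrative helper, not part of OMFITx.
    """
    info = {}
    for line in sysinfo.splitlines():
        key, sep, value = line.partition('=')
        if sep and key and ' ' not in key.strip():
            info[key.strip()] = value.strip()
    return info

raw = 'csh\nARG_MAX=4611686018427387903\nQSTAT=\nSQUEUE=/opt/slurm/default/bin/squeue\necho: No match.'
info = parse_sysinfo(raw)
```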

omfit_classes.OMFITx.manage_user_errors(command, reportUsersErrorByEmail=False, **kw)[source]

This method wraps around user calls of _OMFITpython scripts and manages the printout of errors

Parameters

command – command to be executed

Returns

whatever the command returned

omfit_classes.OMFITx.execute(command_line, interactive_input=None, ignoreReturnCode=False, std_out=None, std_err=None, quiet=False, arguments='', script=None, use_bang_command='OMFIT_run_command.sh', progressFunction=None, extraButtons=None)[source]

This function allows execution of commands on the local workstation.

Parameters
  • command_line – string to be executed locally

  • interactive_input – interactive input to be passed to the command

  • ignoreReturnCode – ignore return code of the command

  • std_out – if a list is passed (e.g. []), the stdout of the program will be put there line by line

  • std_err – if a list is passed (e.g. []), the stderr of the program will be put there line by line

  • quiet – print command to screen or not

  • arguments – arguments that are passed to the command_line

  • script – string with script to be executed. The script option substitutes %s with the automatically generated name of the script. If script is a list or a tuple, then the first item should be the script itself and the second should be the script name

  • use_bang_command – Execute commands via OMFIT_run_command.sh script (useful to execute scripts within a given shell: #!/bin/…) If use_bang_command is a string, then the run script will take that filename. Notice that setting use_bang_command=True is not safe for multiple processes running in the same directory.

  • progressFunction – user function to which the std-out of the process is passed and returns values from 0 to 100 to indicate progress towards completion

  • extraButtons – dictionary with key/function that is used to add extra buttons to the GUI. The function receives a dictionary with the process std_out and pid

Returns

return code of the command
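The std_out/std_err list semantics can be sketched in plain Python with subprocess. This is an illustration of the documented behavior, not the OMFIT implementation:

```python
import subprocess

def run_capture(command_line, std_out=None, std_err=None):
    """Run a local command; if std_out/std_err are lists, append the
    program's output to them line by line, mirroring the documented
    behavior of OMFITx.execute. Returns the command's return code."""
    proc = subprocess.run(command_line, shell=True,
                          capture_output=True, text=True)
    if isinstance(std_out, list):
        std_out.extend(proc.stdout.splitlines())
    if isinstance(std_err, list):
        std_err.extend(proc.stderr.splitlines())
    return proc.returncode

out = []
rc = run_capture('echo hello', std_out=out)
```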

omfit_classes.OMFITx.remote_execute(server, command_line, remotedir, tunnel=None, interactive_input=None, ignoreReturnCode=False, std_out=None, std_err=None, quiet=False, arguments='', script=None, forceRemote=False, use_bang_command='OMFIT_run_command.sh', progressFunction=None, extraButtons=None, xterm=False)[source]

This function allows execution of commands on remote workstations. It has the logic to check if the remote workstation is the local workstation and in that case executes locally.

Parameters
  • server – server to connect and execute the command

  • command_line – string to be executed remotely (NOTE that if server=’’, the command is executed locally in the local directory)

  • remotedir – remote working directory, if remote directory does not exist it will be created

  • tunnel – tunnel to go through to connect to the server

  • interactive_input – interactive input to be passed to the command

  • ignoreReturnCode – ignore return code of the command

  • std_out – if a list is passed (e.g. []), the stdout of the program will be put there line by line

  • std_err – if a list is passed (e.g. []), the stderr of the program will be put there line by line

  • quiet – print command to screen or not

  • arguments – arguments that are passed to the command_line

  • script – string with script to be executed. The script option substitutes %s with the automatically generated name of the script. If script is a list or a tuple, then the first item should be the script itself and the second should be the script name

  • forceRemote – force remote connection even if server is localhost

  • use_bang_command – execute commands via OMFIT_run_command.sh script (useful to execute scripts within a given shell: #!/bin/…) If use_bang_command is a string, then the run script will take that filename. Notice that setting use_bang_command=True is not safe for multiple processes running in the same directory.

  • progressFunction – user function to which the std-out of the process is passed and returns values from 0 to 100 to indicate progress towards completion

  • extraButtons – dictionary with key/function that is used to add extra buttons to the GUI. The function receives a dictionary with the process std_out and pid

  • xterm – if True, launch command in its own xterm

Returns

return code of the command

omfit_classes.OMFITx.remote_upsync(server, local, remote, tunnel=None, ignoreReturnCode=False, keepRelativeDirectoryStructure='', quiet=False)[source]

Function to upload files/directories to remote server (possibly via tunnel connection)

NOTE: this function relies on rsync. There is no way to arbitrarily rename files with rsync. All rsync can do is move files to a different directory.

Parameters
  • server – server to connect and execute the command

  • local – local file(s) (string or list strings) to upsync

  • remote – remote directory or file to save files to

  • tunnel – tunnel to go through to connect to the server

  • ignoreReturnCode – whether to ignore return code of the rsync command

  • keepRelativeDirectoryStructure – string with the common base directory of the local files, to be stripped from their paths (usually equals local_dir)

  • quiet – print command to screen or not

Returns

return code of the rsync command (or True if keepRelativeDirectoryStructure and ignoreReturnCode and some rsync fail)
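As a sketch, the kind of rsync invocation implied by upsync might look like the following. The flags and the absence of tunnel handling are assumptions for illustration; the command actually built by the framework may differ:

```python
def build_upsync_command(server, local_files, remote_dir):
    """Assemble an illustrative rsync upload command.

    Hypothetical helper: the -avz flags and the lack of tunnel
    handling are simplifications of what the framework does.
    """
    if isinstance(local_files, str):
        local_files = [local_files]
    return ['rsync', '-avz'] + local_files + ['%s:%s' % (server, remote_dir)]

cmd = build_upsync_command('user@host', 'g133221.01000', '/tmp/run/')
```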

omfit_classes.OMFITx.remote_downsync(server, remote, local, tunnel=None, ignoreReturnCode=False, keepRelativeDirectoryStructure='', quiet=False)[source]

Function to download files/directories from remote server (possibly via tunnel connection)

NOTE: this function relies on rsync. There is no way to arbitrarily rename files with rsync. All rsync can do is move files to a different directory.

Parameters
  • server – server to connect and execute the command

  • remote – remote file(s) (string or list strings) to downsync

  • local – local directory or file to save files to

  • tunnel – tunnel to go through to connect to the server

  • ignoreReturnCode – whether to ignore return code of the rsync command

  • keepRelativeDirectoryStructure – string with the common base directory of the remote files, to be stripped from their paths (usually equals remote_dir)

  • quiet – print command to screen or not

  • use_scp – (bool) If this flag is True remote_downsync will be executed with “scp” instead of “rsync”. Use for increased download speed. (default: False)

Returns

return code of the rsync command (or True if keepRelativeDirectoryStructure and ignoreReturnCode and some rsync fail)

omfit_classes.OMFITx.mdsvalue(server, treename=None, shot=None, TDI=None)[source]
Parameters
  • server – MDS+ server to connect to

  • treename – name of the MDS+ tree

  • shot – shot number

  • TDI – TDI string to be executed

Returns

result of TDI command

class omfit_classes.OMFITx.IDL(module_root, server=None, tunnel=None, executable=None, workdir=None, remotedir=None, clean=False)[source]

Bases: object

This class provides a live IDL session via the pidly module: https://pypi.python.org/pypi/pyIDL/ In practice this class wraps the pidly.IDL session so that it can handle SERVERS remote connections (including tunneling) and directory management the OMFIT way. The IDL executable is taken from the idl entry of this server under OMFIT[‘MainSettings’][‘SERVER’].

Local and remote working directories are specified in root[‘SETTINGS’][‘SETUP’][‘workDir’] and root[‘SETTINGS’][‘REMOTE_SETUP’][‘workDir’].

Server and tunnel are specified in root[‘SETTINGS’][‘REMOTE_SETUP’][‘server’] and root[‘SETTINGS’][‘REMOTE_SETUP’][‘tunnel’].

If the tunnel is an empty string, the connection to the remote server is direct. If server is an empty string, everything will occur locally and the remote working directory will be ignored.

Parameters
  • module_root – root of the module (e.g. root)

  • server – override module server

  • tunnel – override module tunnel

  • executable – override the executable (by default taken from the idl entry of this server under OMFIT[‘MainSettings’][‘SERVER’])

  • workdir – override module local working directory

  • remotedir – override module remote working directory

  • clean

    clear local/remote working directories

    • ”local”: clean local working directory only

    • ”local_force”: force clean local working directory only

    • ”remote”: clean remote working directory only

    • ”remote_force”: force clean remote working directory only

    • True: clean both

    • ”force”: force clean both

    • False: clean neither [DEFAULT]

>>> idl=OMFITx.IDL(OMFIT['EFIT'])
>>> idl('$pwd')
>>> idl('x = total([1, 1], /int)')
>>> print(idl.x)
>>> tmp=OMFITgeqdsk(OMFITsrc+'/../samples/g133221.01000')
>>> idl.upsync([tmp])
>>> idl.downsync(['g133221.01000'])
upsync(inputs=[], keepRelativeDirectoryStructure=False, ignoreReturnCode=False, quiet=False)[source]

Function used to upload files from the local working directory to remote IDL directory

Parameters
  • inputs – list of input objects or path to files, which will be deployed in the local or remote working directory. To deploy objects with a different name one can specify tuples (inputObject,’deployName’)

  • ignoreReturnCode – whether to ignore return code of the rsync command

  • quiet – print command to screen or not

downsync(outputs=[], ignoreReturnCode=False, keepRelativeDirectoryStructure=False, quiet=False)[source]

Function used to download files from the remote IDL directory to the local working directory

Parameters
  • outputs – list of output files which will be fetched from the remote directory

  • keepRelativeDirectoryStructure – [True/False] keep relative directory structure of the remote files

  • ignoreReturnCode – whether to ignore return code of the rsync command

  • quiet – print command to screen or not

omfit_classes.OMFITx.initWorkdir(module_root, server=None, tunnel=None, workdir=None, remotedir=None, clean=True, quiet=False)[source]

High level function to simplify initialization of directories within a module. This function will:

  1. Create and clear the local and remote working directories

  2. Change directory to the local working directory

Server and tunnel are specified in root[‘SETTINGS’][‘REMOTE_SETUP’][‘server’] and root[‘SETTINGS’][‘REMOTE_SETUP’][‘tunnel’]

Local and remote working directories are specified in root[‘SETTINGS’][‘SETUP’][‘workDir’] and root[‘SETTINGS’][‘REMOTE_SETUP’][‘workDir’]

Parameters
  • module_root – root of the module

  • server – string that overrides module server

  • tunnel – string that overrides module tunnel

  • workdir – string that overrides module local working directory

  • remotedir – string that overrides module remote working directory

  • clean

    clear local/remote working directories

    • ”local”: clean local working directory only

    • ”local_force”: force clean local working directory only

    • ”remote”: clean remote working directory only

    • ”remote_force”: force clean remote working directory only

    • True: clean both

    • ”force”: force clean both

    • False: clean neither

  • quiet – print command to screen or not

Returns

strings for local and remote directories (None if there was a problem in either one)
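The clean option accepts the same set of values throughout this module (initWorkdir, executable, IDL, ide). The values can be decoded as a mapping onto which directories get cleaned and whether cleaning is forced; the sketch below is an illustrative decoding, not the framework’s own code:

```python
def decode_clean(clean):
    """Map a 'clean' option value to (clean_local, clean_remote, force).

    Illustrative helper mirroring the documented option values.
    """
    table = {
        False: (False, False, False),
        True: (True, True, False),
        'force': (True, True, True),
        'local': (True, False, False),
        'local_force': (True, False, True),
        'remote': (False, True, False),
        'remote_force': (False, True, True),
    }
    return table[clean]
```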

omfit_classes.OMFITx.executable(module_root=None, inputs=[], outputs=[], clean=True, interactive_input=None, server=None, tunnel=None, executable=None, workdir=None, remotedir=None, ignoreReturnCode=True, std_out=None, std_err=None, quiet=False, queued=False, keepRelativeDirectoryStructure=True, arguments='', script=None, forceRemote=False, progressFunction=None, use_bang_command='OMFIT_run_command.sh', extraButtons=None, xterm=False, clean_after=False)[source]

High level function that simplifies local/remote execution of software within a module.

This function will:

  1. cd to the local working directory

  2. Clear the local/remote working directories ([True] by default)

  3. Deploy the “input” objects to the local working directory

  4. Upload them remotely

  5. Execute the software

  6. Download the “output” files to the local working directory

Executable command is specified in root[‘SETTINGS’][‘SETUP’][‘executable’]

Local and remote working directories are specified in root[‘SETTINGS’][‘SETUP’][‘workDir’] and root[‘SETTINGS’][‘REMOTE_SETUP’][‘workDir’].

Server and tunnel are specified in root[‘SETTINGS’][‘REMOTE_SETUP’][‘server’] and root[‘SETTINGS’][‘REMOTE_SETUP’][‘tunnel’].

If the tunnel is an empty string, the connection to the remote server is direct. If server is an empty string, everything will occur locally and the remote working directory will be ignored.

Parameters
  • module_root – root of the module (e.g. root) used to set default values for ‘executable’, ‘server’, ‘tunnel’, ‘workdir’, ‘remotedir’. If module_root is None or module_root is OMFIT, then ‘executable’, ‘server’, ‘tunnel’, ‘workdir’, ‘remotedir’ must be specified

  • inputs – list of input objects or path to files, which will be deployed in the local or remote working directory. To deploy objects with a different name one can specify tuples (inputObject,’deployName’)

  • outputs – list of output files which will be fetched from the remote directory

  • clean

    clear local/remote working directories

    • ”local”: clean local working directory only

    • ”local_force”: force clean local working directory only

    • ”remote”: clean remote working directory only

    • ”remote_force”: force clean remote working directory only

    • True: clean both [DEFAULT]

    • ”force”: force clean both

    • False: clean neither

  • arguments – arguments which will be passed to the executable

  • interactive_input – interactive input to be passed to the executable

  • server – override module server

  • tunnel – override module tunnel

  • executable – override module executable

  • workdir – override module local working directory

  • remotedir – override module remote working directory

  • ignoreReturnCode – ignore return code of executable

  • std_out – if a list is passed (e.g. []), the stdout of the program will be put there line by line; if a string is passed and bool(queued), this should indicate the path of the file that gives the stdout of the queued job

  • std_err – if a list is passed (e.g. []), the stderr of the program will be put there line by line; if a string is passed and bool(queued), this should indicate the path of the file that gives the stderr of the queued job

  • quiet – if True, suppress output to the command box

  • keepRelativeDirectoryStructure – [True/False] keep relative directory structure of the remote files

  • script – string with script to be executed. The script option requires %s in the command line at the location where you want the script filename to appear. If script is a list or a tuple, then the first item should be the script itself and the second should be the script name

  • forceRemote – force remote connection even if server is localhost

  • progressFunction – user function to which the std-out of the process is passed and returns values from 0 to 100 to indicate progress towards completion

  • queued – If cast as bool is True, invokes manage_job, using queued as qsub_findID keyword of manage_job, and also takes over std_out and std_err

  • use_bang_command – Execute commands via OMFIT_run_command.sh script (useful to execute scripts within a given shell: #!/bin/…) If use_bang_command is a string, then the run script will take that filename. Notice that setting use_bang_command=True is not safe for multiple processes running in the same directory.

  • extraButtons – dictionary with key/function that is used to add extra buttons to the GUI. The function receives a dictionary with the process std_out and pid

  • xterm – if True, launch the command in its own xterm

  • clean_after – (bool) If this flag is True, the remote directory will be removed once the outputs have been transferred to the local working directory. The remote directory must have OMFIT in its name. (default: False)

  • use_scp – (bool) If this flag is True, the remote downsync of data will use the “scp” command instead of “rsync”. This should be used for increased download speed. (default: False)

Returns

return code of the command

omfit_classes.OMFITx.remote_python(module_root=None, python_script=None, target_function=None, namespace={}, executable=None, forceRemote=False, pickle_protocol=2, clean_local=False, **kw)[source]

Execute a Python target_function that is self-contained in a Python python_script. Useful to execute a Python module as a separate process on a local (or remote) workstation. This function relies on the OMFITx.executable function, and additional keyword arguments are passed to it.

Parameters
  • module_root – root of the module (e.g. root) used to set default values for ‘executable’, ‘server’, ‘tunnel’, ‘workdir’, ‘remotedir’. If module_root is None or module_root is OMFIT, then ‘executable’, ‘server’, ‘tunnel’, ‘workdir’, ‘remotedir’ must be specified

  • python_script – OMFITpythonTask (or string) to execute

  • target_function – function in the python_script that will be called

  • namespace – dictionary with variables passed to the target_function

  • executable – python executable (if None then is set based on SERVER)

  • forceRemote – force remote connection even if server is localhost

  • pickle_protocol – pickle protocol version (use 2 for Python2/3 compatibility)

  • clean_local – (bool) If this flag is True, the local working directory is cleaned and deleted after the result to be returned has been loaded into memory. The directory must have OMFIT somewhere in the name as a safety measure. (default: False)

  • **kw – additional arguments are passed to the underlying OMFITx.executable function

Returns

returns the output of the target_function
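The pickle_protocol keyword matters because the namespace is serialized for the separate Python process; protocol 2 is the highest protocol understood by both Python 2 and Python 3. A minimal round-trip sketch of that mechanism:

```python
import pickle

# The namespace passed to the target_function is serialized and
# re-loaded in the separate process; protocol 2 keeps the payload
# readable by both Python 2 and Python 3 interpreters.
namespace = {'shot': 133221, 'times': [1000, 2000]}
payload = pickle.dumps(namespace, protocol=2)
restored = pickle.loads(payload)
```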

class omfit_classes.OMFITx.manage_job(module_root, qsub_job, server=None, tunnel=None, remotedir=None, qsub_findID='(?i)\\\'[^\\\']+\\\'|"[^"]+"|([0-9]{3,}[\\[\\]\\.\\w-]*)\\s*', qstat_command=None, qstat_findStatus='(?i)\\s+[rqwecpd]{1,2}\\s+', qdel_command=None)[source]

Bases: object

identify_queuing_system()[source]

This function identifies qstat and qdel commands to be used, if these were not specified

Returns

None

qstat(quiet=True, sleep=0, std_out=None, std_err=None)[source]

qstat command (or equivalent)

Parameters
  • quiet

    controls the output that is displayed to screen

    • False: prints the full output of the command

    • select: prints only the line involving the right jobID

    • True: nothing gets printed

  • sleep – grace time in seconds before checking

Returns

the status of the job, if the jobID is found in the output of qstat. Otherwise None.

qdel(std_out=None, std_err=None)[source]

qdel command (or equivalent)

wait(timeout=None, sleep=10)[source]

Wait for a job to finish

Parameters
  • timeout – Timeout in seconds after which the wait function will return

  • sleep – Interval in seconds between checks

Returns

string with the last seen job status. If the job was not there, then a ‘?’ is returned
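The wait logic amounts to polling the queue status until the job disappears or a timeout elapses. A generic sketch of such a loop, with an injected status function standing in for qstat (illustrative, not the framework’s implementation):

```python
import time

def wait_for_job(get_status, timeout=None, sleep=0.01):
    """Poll get_status() until it returns None (job gone) or timeout
    seconds elapse. Returns the last seen job status, or '?' if the
    job was never seen, mirroring the documented return value."""
    start = time.time()
    last = '?'
    while True:
        status = get_status()
        if status is None:
            return last
        last = status
        if timeout is not None and time.time() - start > timeout:
            return last
        time.sleep(sleep)

# Fake queue: job is 'R' (running) twice, then finishes.
states = iter(['R', 'R', None])
result = wait_for_job(lambda: next(states))
```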

wait_print(output_file='', error_file='', progressFunction=None, extraButtons=None)[source]

Wait for a job to finish and follow the output file and print the error file that the job generates

Parameters
  • output_file – output file that will be followed until the job ends (on the std_out)

  • error_file – error file that will be printed when the job ends (on the std_err)

  • progressFunction – user function to which the std-out of the process is passed and returns values from 0 to 100 to indicate progress towards completion

  • extraButtons – dictionary with key/function that is used to add extra buttons to the GUI. The function receives a dictionary with the process std_out and pid

omfit_classes.OMFITx.submit_job(module_root, batch_command, environment='', partition='', partition_flag=None, ntasks=1, nproc_per_task=1, job_time='1', memory='2GB', batch_type='SLURM', batch_option='', out_name='Job', **kw)[source]

Launch a (SLURM,PBS) job

Parameters
  • module_root – The module instance from which servers, etc. are culled

  • batch_command – A multi-line string or list of strings that should be executed

  • environment – A string to be executed to set up the environment before launching the batch job

  • partition – A string to be inserted into the batch script that indicates which partition(s) (comma separated) to run on; if None, execute batch_command serially

  • partition_flag – A string to be inserted before the partition names which matches the system configuration (e.g. -p, --qos)

  • nproc_per_task – Number of processors to be used by each line of batch_command

  • job_time – Max wall time of each line of batch_command - see the sbatch --time option (default 1 minute)

  • memory – Max memory usage of each cpu utilized by batch_command - see the sbatch --mem-per-cpu option (default 2GB)

  • batch_type – Type of batch system (SLURM, PBS)

  • batch_option

    A string specifying any additional batch options in the file header; It is inserted in raw form after the other batch options, so should include #SBATCH or #PBS if it is a batch type option, and it could be a multiline string of options

    (expected to contain the relevant #{SBATCH,PBS})

  • out_name – Name used for the output and error files

  • **kw – All other keywords are passed to OMFITx.executable

Returns

None
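As an illustration, the batch-file header that options like partition, job_time, memory and out_name translate into for a SLURM system might look like the following. This is a hypothetical assembly for illustration; the script generation actually performed by the framework may differ:

```python
def slurm_header(partition='general', partition_flag='-p',
                 job_time='1', memory='2GB', out_name='Job',
                 batch_option=''):
    """Assemble an illustrative SLURM batch header from the
    documented options. Hypothetical helper, not OMFITx code."""
    lines = [
        '#!/bin/bash',
        '#SBATCH %s %s' % (partition_flag, partition),
        '#SBATCH --time=%s' % job_time,
        '#SBATCH --mem-per-cpu=%s' % memory,
        '#SBATCH --output=%s.out' % out_name,
        '#SBATCH --error=%s.err' % out_name,
    ]
    if batch_option:
        # batch_option is inserted in raw form, as documented
        lines.append(batch_option)
    return '\n'.join(lines)

header = slurm_header(partition='short', job_time='30', memory='4GB')
```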

omfit_classes.OMFITx.job_array(module_root, batch_lines, environment='', partition='', ntasks=1, nproc_per_task=1, job_time='1', memory='2GB', batch_type='SLURM', partition_flag=None, batch_option='', **kw)[source]

Launch a (SLURM,PBS) job array

Parameters
  • module_root – The module instance from which servers, etc. are culled

  • batch_lines – A multi-line string or list of strings that should be executed in parallel

  • environment – A string to be executed to set up the environment before launching the batch job

  • partition – A string to be inserted into the batch script that indicates which partition(s) (comma separated) to run on; if None, execute batch_lines serially

  • partition_flag – A string to be inserted before the partition names which matches the system configuration (e.g. -p, --qos)

  • nproc_per_task – Number of processors to be used by each line of batch_lines

  • job_time – Max wall time of each line of batch_lines - see the sbatch --time option (default 1 minute)

  • memory – Max memory usage of each cpu utilized by batch_lines - see the sbatch --mem-per-cpu option (default 2GB)

  • batch_type – Type of batch system (SLURM, PBS)

  • batch_option

    A string specifying any additional batch options in the file header; It is inserted in raw form after the other batch options, so should include #SBATCH or #PBS if it is a batch type option, and it could be a multiline string of options

    (expected to contain the relevant #{SBATCH,PBS})

  • **kw – All other keywords are passed to OMFITx.executable

Returns

None

omfit_classes.OMFITx.ide(module_root, ide, script=None, inputs=[], outputs=[], clean=True, interactive_input=None, server=None, tunnel=None, executable=None, workdir=None, remotedir=None, ignoreReturnCode=True, std_out=None, std_err=None, keepRelativeDirectoryStructure=True, arguments='')[source]

High level function that simplifies local/remote execution of Integrated Development Environments (IDEs)

This function will:

  1. cd to the local working directory

  2. Clear the local/remote working directories

  3. Deploy the “input” objects to the local working directory

  4. Upload them remotely

  5. Execute the IDE

  6. Download the “output” files to the local working directory

The IDE executable, server and tunnel, and local and remote working directory depend on the MainSettings[‘SERVER’][ide]

If the tunnel is an empty string, the connection to the remote server is direct. If server is an empty string, everything will occur locally and the remote working directory will be ignored.

Parameters
  • module_root – root of the module (e.g. root) or OMFIT itself

  • ide – what IDE to execute (e.g. idl or matlab)

  • inputs – list of input objects or path to files, which will be deployed in the local or remote working directory. To deploy objects with a different name one can specify tuples (inputObject,’deployName’)

  • outputs – list of output files which will be fetched from the remote directory

  • clean

    clear local/remote working directories

    • ”local”: clean local working directory only

    • ”local_force”: force clean local working directory only

    • ”remote”: clean remote working directory only

    • ”remote_force”: force clean remote working directory only

    • True: clean both

    • ”force”: force clean both

    • False: clean neither

  • arguments – arguments which will be passed to the executable

  • interactive_input – interactive input to be passed to the executable

  • server – override module server

  • tunnel – override module tunnel

  • executable – override module executable

  • workdir – override module local working directory

  • remotedir – override module remote working directory

  • ignoreReturnCode – ignore return code of executable

  • std_out – if a list is passed (e.g. []), the stdout of the program will be put there line by line

  • std_err – if a list is passed (e.g. []), the stderr of the program will be put there line by line

  • script – string with the script to be executed. The script option requires a %s placeholder in the command line at the location where the script filename should appear

Returns

return code of the command

class omfit_classes.OMFITx.archive(module_root, server=None, tunnel=None, remotedir=None, storedir=None, store_command=None, restore_command=None)[source]

Bases: object

High level function that simplifies archival of simulations files

Parameters
  • module_root – root of the module (e.g. root)

  • server – override module server

  • tunnel – override module tunnel

  • storedir – directory where ZIP files are stored. If storedir is None, this will be sought under module_root[‘SETTINGS’][‘REMOTE_SETUP’][‘storedir’] and finally under SERVER[server][‘storedir’]

  • store_command – (optional) user-defined store command issued to store data (eg. for HPSS usage). The strings {remotedir}, {storedir} and (unknown) are substituted with the actual remotedir, storedir and filename. If store_command is None, this will be sought under module_root[‘SETTINGS’][‘REMOTE_SETUP’][‘store_command’] and finally under SERVER[server][‘store_command’]

  • restore_command – (optional) user-defined restore command issued to restore data (eg. for HPSS usage). The strings {remotedir}, {storedir} and (unknown) are substituted with the actual remotedir, storedir and filename. If restore_command is None, this will be sought under module_root[‘SETTINGS’][‘REMOTE_SETUP’][‘restore_command’] and finally under SERVER[server][‘restore_command’]

store(remotedir, filename, quiet=False, background=False, force=False, store_command=None)[source]

Store remotedir to filename ZIP file

Parameters
  • remotedir – remote directory to archive (usually: root[‘SETTINGS’][‘REMOTE_SETUP’][‘workdir’]) This parameter needs to be specified because the working directory can change.

  • filename – filename to be used for archival

  • quiet – print store process to screen

  • background – put creation of ZIP archive in background (ignored if store_command is used)

  • force – force store even if remotedir does not have OMFIT substring in it

  • store_command – store command to be used

Returns

object instance

restore(remotedir, quiet=False, background=False, force=False, restore_command=None)[source]

Restore filename ZIP file to remotedir

Parameters
  • remotedir – remote directory to deflate to (usually: root[‘SETTINGS’][‘REMOTE_SETUP’][‘workdir’]) This parameter needs to be specified because the working directory can change.

  • quiet – print restore process to screen

  • background – put restore of ZIP archive in background (ignored if restore_command is used)

  • force – force restore even if remotedir does not have OMFIT substring in it

  • restore_command – restore command to be used

Returns

object instance

omfit_classes.OMFITx.glob(server='', file_filter='*', workDir='./', tunnel='', search_upstream=False)[source]

Returns list of files in remote directory

Parameters
  • server – remote server

  • file_filter – regular expression used to filter files (* by default)

  • workDir – remote working directory

  • tunnel – remote tunnel to use

  • search_upstream – T/F: search for the file in parent directories until it is found or / is reached

omfit_classes.OMFITx.email(to, subject, message, attachments=None, sysinfo=True, **kw)[source]

Send an email

Parameters
  • to – must be one of 1) a single address as a string 2) a string of comma separated multiple address 3) a list of string addresses

  • subject – String

  • message – String

  • attachments – List of OMFITobjects or path to files

  • sysinfo – Include system info at the bottom of the message

  • **kw – Extra arguments passed to the send_email utility function

Returns

string that user can decide to print to screen

omfit_classes.OMFITx.f2py(filename, modname)[source]

Run f2py on filename to get modname.so and return modname

omfit_classes.OMFITx.titleGUI(*args, **kw)[source]
omfit_classes.OMFITx.lock(*args, **kw)[source]
omfit_classes.OMFITx.Checkbutton(*args, **kw)[source]
omfit_classes.OMFITx.CheckButton(*args, **kw)[source]
omfit_classes.OMFITx.compoundGUI(*args, **kw)[source]
omfit_classes.OMFITx.closeGUI(*args, **kw)[source]
omfit_classes.OMFITx.closeAllGUIs(*args, **kw)[source]
omfit_classes.OMFITx.updateGUI(*args, **kw)[source]
omfit_classes.OMFITx.end(*args, **kw)[source]
omfit_classes.OMFITx.mainSettings_ShotTimeDevice_GUI(*args, **kw)[source]
omfit_classes.OMFITx.AskUser(*args, **kw)[source]
omfit_classes.OMFITx.custom_ttk_style(type, **kw)[source]

Generates ttk styles dynamically and buffers them

Parameters
  • type – one of the ttk_styles

  • kw – ttk style configuration attributes

Returns

dynamically generated ttk style name

omfit_classes.OMFITx.update_gui_theme(location=None)[source]

One place to set the ttk theme, reset custom styles, and update main gui elements.

Utilities

utils_math

omfit_classes.utils_math.np_errors(**kw)[source]
omfit_classes.utils_math.np_ignored(f)
omfit_classes.utils_math.np_raised(f)
omfit_classes.utils_math.np_printed(f)
omfit_classes.utils_math.np_warned(f)
omfit_classes.utils_math.get_array_hash(y)[source]

Get the hash for an array. A hash is a fixed-size integer that identifies a particular value; comparing hashes is faster than comparing whole arrays (provided the hash is already stored).

Example:

y1 = np.arange(3)
y2 = np.arange(3) + 1
assert(get_array_hash(y1) == get_array_hash(y2)) # will raise an error
Parameters

y – np.ndarray.

Returns

hash.
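A minimal equivalent can be sketched with plain numpy by hashing the array's raw bytes; the name array_hash and the details below are illustrative, not the actual OMFIT implementation:

```python
import numpy as np

def array_hash(y):
    """Sketch: hash an array by its shape, dtype, and raw bytes."""
    y = np.ascontiguousarray(y)
    return hash((y.shape, y.dtype.str, y.tobytes()))

y1 = np.arange(3)
y2 = np.arange(3) + 1
assert array_hash(y1) == array_hash(np.arange(3))  # equal arrays give equal hashes
assert array_hash(y1) != array_hash(y2)            # different values give different hashes
```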

omfit_classes.utils_math.ismember(A, B)[source]

Mimics the Matlab ismember() function to look for occurrences of A in B

Parameters
  • A – number or list/array

  • B – number or list/array

Returns

returns the lists lia, locb. lia: True where the data in A is found in B and False elsewhere. locb: the lowest index in B for each value in A that is a member of B, and None elsewhere (where the value of A is not a member of B)
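The documented lia/locb behavior can be illustrated with a small hypothetical re-implementation (ismember_sketch is not the actual OMFIT code):

```python
import numpy as np

def ismember_sketch(A, B):
    """Illustrative version of the documented lia/locb behavior."""
    A, B = np.atleast_1d(A), np.atleast_1d(B)
    lia = [bool((B == a).any()) for a in A]
    # lowest index in B for members of B, None elsewhere
    locb = [int(np.nonzero(B == a)[0][0]) if (B == a).any() else None for a in A]
    return lia, locb

lia, locb = ismember_sketch([1, 4, 2], [2, 1, 1, 3])
assert lia == [True, False, True]
assert locb == [1, None, 0]
```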

omfit_classes.utils_math.map1d(vecfun, ndarr, axis=- 1, args=[], **kwargs)[source]

Map a vector operation to a single axis of any multi-dimensional array.

Parameters
  • vecfun – A function that takes a 1D vector input.

  • ndarr – A np.ndarray.

  • axis – Axis of the array on which vecfun will operate.

  • *args – Additional arguments passed to vecfun.

  • **kwargs – Additional keyword arguments are also passed to vecfun.

Returns

A new array of the same dimensionality as ndarr.
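In spirit this is comparable to numpy's apply_along_axis, which can serve as a rough stand-in (the actual map1d may differ in details):

```python
import numpy as np

arr = np.arange(12, dtype=float).reshape(3, 4)
# apply a 1D function (normalize each vector to unit maximum) along the last axis
out = np.apply_along_axis(lambda v: v / v.max(), -1, arr)
assert out.shape == arr.shape
assert np.allclose(out[:, -1], 1.0)  # each row ends at its maximum
```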

omfit_classes.utils_math.uniform_resample_2D_line(X0, Y0, n)[source]

Resampling of 2D line with uniform distribution of points along the line

Parameters
  • X0 – input x coordinates of 2D path

  • Y0 – input y coordinates of 2D path

  • n – number of points for output 2d path

Returns

tuple of x,y uniformly resampled path
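The idea can be sketched with cumulative arc length and np.interp; uniform_resample_sketch below is illustrative, not the OMFIT implementation:

```python
import numpy as np

def uniform_resample_sketch(X0, Y0, n):
    """Sketch: resample a 2D path so points are uniformly spaced in arc length."""
    X0, Y0 = np.asarray(X0, float), np.asarray(Y0, float)
    # cumulative arc length along the path
    s = np.concatenate(([0.0], np.cumsum(np.hypot(np.diff(X0), np.diff(Y0)))))
    s_new = np.linspace(0.0, s[-1], n)
    return np.interp(s_new, s, X0), np.interp(s_new, s, Y0)

x, y = uniform_resample_sketch([0, 0, 1], [0, 1, 1], 5)
# consecutive points are now equidistant along the path
assert np.allclose(np.hypot(np.diff(x), np.diff(y)), 0.5)
```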

omfit_classes.utils_math.stepwise_data_index(y, broaden=0)[source]

This function returns the indices that one must use to reproduce the step-wise data with the minimum number of points. In the ascii-art below, it returns the indices of the crosses. The original data can then be reproduced by nearest-neighbor interpolation:

Y ^
  |              x.....x
  |        x....x
  |  x....x             x......x
  |
  0-----------------------------> X

Hint: can also be used to compress linearly varying data:

i = stepwise_data_index(np.gradient(y))

The original data can then be reproduced by linear interpolation.

Parameters
  • y – input array

  • broaden – return broaden additional points around each of the x’s (the step edges)

Returns

indices for compression

omfit_classes.utils_math.pack_points(n, x0, p)[source]

Packed points distribution between -1 and 1

Parameters
  • n – number of points

  • x0 – pack points around x0, a float between -1 and 1

  • p – packing proportional to p factor >0

Returns

packed points distribution between -1 and 1

omfit_classes.utils_math.simplify_polygon(x, y, tolerance=None, preserve_topology=True)[source]

Returns a simplified representation of a polygon

Parameters
  • x – array of x coordinates

  • y – array of y coordinates

  • tolerance – all points in the simplified object will be within the tolerance distance of the original geometry if tolerance is None, then a tolerance guess is returned

  • preserve_topology – by default a slower algorithm is used that preserves topology

Returns

x and y coordinates of simplified polygon geometry if tolerance is None, then a tolerance guess is returned

omfit_classes.utils_math.torecarray(data2D, names=None)[source]

Converts a 2D list of lists, or a dictionary of dictionaries with 1d elements to a recarray

Parameters
  • data2D – 2D input data

  • names – recarray columns names

Returns

recarray

omfit_classes.utils_math.dictdict_2_dictlist(data)[source]

dictionary of dictionaries with 1d elements, to dictionary of lists

Parameters

data – dictionary of dictionaries

Returns

dictionary of lists

omfit_classes.utils_math.flatten_iterable(l)[source]

flattens a structure of nested iterables

Parameters

l – input iterable structure

Returns

generator with items in the structure
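A typical recursive-generator implementation of this idea looks like the following sketch (flatten_sketch is illustrative; the OMFIT version may handle more cases):

```python
def flatten_sketch(l):
    """Sketch: recursively yield items from nested iterables (strings kept whole)."""
    for item in l:
        if hasattr(item, '__iter__') and not isinstance(item, str):
            yield from flatten_sketch(item)
        else:
            yield item

assert list(flatten_sketch([1, [2, [3, 4]], 'ab'])) == [1, 2, 3, 4, 'ab']
```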

omfit_classes.utils_math.lists_loop_generator(*argv, **kwargs)[source]

This function generates a list of items combining the entries from multiple lists

Parameters
  • *args – multiple 1D lists (of equal length, or of 0 or 1 element length)

  • permute – output all element permutations

Returns

list of items combining the entries from multiple lists

Examples:

shots = 123
times = [1,2]
runs = 'a01'
shots, times, runs = lists_loop_generator(shots,times,runs)
for shot, time, run in zip(shots,times,runs):
    myCode(shot,time,run)

shots = [123,446]
times = [1,2]
runs = 'a01'
shots, times, runs = lists_loop_generator(shots,times,runs)

Depending on permute, lists of length 2 (paired entries) or length 4 (all permutations) are now returned

omfit_classes.utils_math.is_uncertain(var)[source]
Parameters

var – Variable or array to test

Returns

True if input variable or array is uncertain

omfit_classes.utils_math.wrap_array_func(func)[source]
omfit_classes.utils_math.chunks(l, n)[source]

Yield successive n-sized chunks from l
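A standard implementation of this pattern, shown here as a sketch (chunks_sketch is illustrative, not necessarily the OMFIT code):

```python
def chunks_sketch(l, n):
    """Sketch: yield successive n-sized chunks from l (last chunk may be shorter)."""
    for i in range(0, len(l), n):
        yield l[i:i + n]

assert list(chunks_sketch([1, 2, 3, 4, 5], 2)) == [[1, 2], [3, 4], [5]]
```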

omfit_classes.utils_math.unsorted_unique(x)[source]

Find the unique elements of an array. The order of the elements follows their first appearance.

Parameters

x – input array or list

Returns

unique array or list (depending on whether the input was an array or a list)
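The behavior can be sketched with np.unique and its return_index option (this illustrative version always returns an array, unlike the documented list/array behavior):

```python
import numpy as np

def unsorted_unique_sketch(x):
    """Sketch: unique elements ordered by first appearance (returns an array)."""
    _, idx = np.unique(x, return_index=True)
    return np.asarray(x)[np.sort(idx)]

assert unsorted_unique_sketch([3, 1, 3, 2, 1]).tolist() == [3, 1, 2]
```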

omfit_classes.utils_math.closestIndex(my_list, my_number=0)[source]

Given a SORTED iterable (a numeric array or list of numbers) and a numeric scalar my_number, find the index of the number in the list that is closest to my_number

Parameters
  • my_list – Sorted iterable (list or array) to search for number closest to my_number

  • my_number – Number to get close to in my_list

Returns

Index of my_list element closest to my_number

Note

If two numbers are equally close, returns the index of the smallest number.
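Because the input is sorted, the lookup can be done in O(log n) with bisect; closest_index_sketch below is an illustration of the documented behavior, not the actual implementation:

```python
import bisect

def closest_index_sketch(my_list, my_number=0):
    """Sketch for a SORTED list; ties go to the smaller number, as documented."""
    pos = bisect.bisect_left(my_list, my_number)
    if pos == 0:
        return 0
    if pos == len(my_list):
        return len(my_list) - 1
    before, after = my_list[pos - 1], my_list[pos]
    # strict < sends ties to the smaller number
    return pos if after - my_number < my_number - before else pos - 1

assert closest_index_sketch([0, 2, 4, 6], 3) == 1    # tie: index of the smaller number
assert closest_index_sketch([0, 2, 4, 6], 4.5) == 2
```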

omfit_classes.utils_math.order_axes(M, order)[source]

Arbitrarily reorder the axes of a multidimensional array

Parameters
  • M – input multidimensional array

  • order – integers with list of axes to be ordered

Returns

M with reordered axes
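A behavior comparable in spirit can be obtained with numpy's transpose (order_axes itself may differ in how it interprets order):

```python
import numpy as np

M = np.zeros((2, 3, 4))
# move the last axis to the front
reordered = np.transpose(M, (2, 0, 1))
assert reordered.shape == (4, 2, 3)
```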

omfit_classes.utils_math.qgriddata(px, py, data, X, Y, func=<function sum>, fill=nan)[source]
Parameters
  • px

  • py

  • data

  • X

  • Y

  • func

Returns

class omfit_classes.utils_math.RectBivariateSplineNaN(Z, R, Q, *args, **kw)[source]

Bases: object

ev(*args)[source]
class omfit_classes.utils_math.interp1e(x, y, *args, **kw)[source]

Bases: externalImports.interp1d

Shortcut for scipy.interpolate.interp1d with fill_value=’extrapolate’ and bounds_error=False as defaults

Interpolate a 1-D function.

x and y are arrays of values used to approximate some function f: y = f(x). This class returns a function whose call method uses interpolation to find the value of new points.

Parameters: x : (N,) array_like

A 1-D array of real values.

y : (…,N,…) array_like

A N-D array of real values. The length of y along the interpolation axis must be equal to the length of x.

kind : str or int, optional

Specifies the kind of interpolation as a string or as an integer specifying the order of the spline interpolator to use. The string has to be one of ‘linear’, ‘nearest’, ‘nearest-up’, ‘zero’, ‘slinear’, ‘quadratic’, ‘cubic’, ‘previous’, or ‘next’. ‘zero’, ‘slinear’, ‘quadratic’ and ‘cubic’ refer to a spline interpolation of zeroth, first, second or third order; ‘previous’ and ‘next’ simply return the previous or next value of the point; ‘nearest-up’ and ‘nearest’ differ when interpolating half-integers (e.g. 0.5, 1.5) in that ‘nearest-up’ rounds up and ‘nearest’ rounds down. Default is ‘linear’.

axis : int, optional

Specifies the axis of y along which to interpolate. Interpolation defaults to the last axis of y.

copy : bool, optional

If True, the class makes internal copies of x and y. If False, references to x and y are used. The default is to copy.

bounds_error : bool, optional

If True, a ValueError is raised any time interpolation is attempted on a value outside of the range of x (where extrapolation is necessary). If False, out of bounds values are assigned fill_value. By default, an error is raised unless fill_value="extrapolate".

fill_value : array-like or (array-like, array_like) or “extrapolate”, optional
  • If a ndarray (or float), this value will be used to fill in for requested points outside of the data range. If not provided, then the default is NaN. The array-like must broadcast properly to the dimensions of the non-interpolation axes.

  • If a two-element tuple, then the first element is used as a fill value for x_new < x[0] and the second element is used for x_new > x[-1]. Anything that is not a 2-element tuple (e.g., list or ndarray, regardless of shape) is taken to be a single array-like argument meant to be used for both bounds as below, above = fill_value, fill_value.

    New in version 0.17.0.

  • If “extrapolate”, then points outside the data range will be extrapolated.

    New in version 0.17.0.

assume_sorted : bool, optional

If False, values of x can be in any order and they are sorted first. If True, x has to be an array of monotonically increasing values.

Attributes: fill_value

Methods: __call__

See Also: splrep, splev

Spline interpolation/smoothing based on FITPACK.

UnivariateSpline : An object-oriented wrapper of the FITPACK routines. interp2d : 2-D interpolation

Calling interp1d with NaNs present in input values results in undefined behaviour.

Input values x and y must be convertible to float values like int or float.

Examples:

>>> import matplotlib.pyplot as plt
>>> from scipy import interpolate
>>> x = np.arange(0, 10)
>>> y = np.exp(-x/3.0)
>>> f = interpolate.interp1d(x, y)

>>> xnew = np.arange(0, 9, 0.1)
>>> ynew = f(xnew)   # use interpolation function returned by `interp1d`
>>> plt.plot(x, y, 'o', xnew, ynew, '-')
>>> plt.show()

Initialize a 1-D linear interpolation class.

dtype
class omfit_classes.utils_math.interp1dPeriodic(t, y, *args, **kw)[source]

Bases: object

1D linear interpolation for periodic functions

Parameters
  • t – array Independent variable: Angle (rad). Doesn’t have to be on any particular interval as it will be unwrapped.

  • y – array matching length of t Dependent variable: values as a function of t
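The wrap-around logic can be sketched with np.interp by padding one period on each side; periodic_interp_sketch is illustrative, not the actual interp1dPeriodic code:

```python
import numpy as np

def periodic_interp_sketch(t, y, t_new, period=2 * np.pi):
    """Sketch: linear interpolation of y(t) with periodic wrap-around."""
    t = np.mod(np.asarray(t, float), period)
    order = np.argsort(t)
    t, y = t[order], np.asarray(y, float)[order]
    # pad one period on each side so np.interp interpolates across the wrap
    tp = np.concatenate((t - period, t, t + period))
    yp = np.concatenate((y, y, y))
    return np.interp(np.mod(t_new, period), tp, yp)

t = np.array([0, np.pi / 2, np.pi, 3 * np.pi / 2])  # samples of sin(t)
y = np.array([0.0, 1.0, 0.0, -1.0])
assert np.isclose(periodic_interp_sketch(t, y, -np.pi / 2), -1.0)  # wraps to 3*pi/2
```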

class omfit_classes.utils_math.uinterp1d(x, y, kind='linear', axis=- 1, copy=True, bounds_error=True, fill_value=nan, assume_sorted=False, std_pow=2, **kw)[source]

Bases: omfit_classes.utils_math.interp1e

Adjusted scipy.interpolate.interp1d (documented below) to interpolate the nominal_values and std_devs of an uncertainty array.

NOTE: uncertainty in the x data is neglected, only uncertainties in the y data are propagated.

Parameters

std_pow – float. Uncertainty is raised to this power, interpolated, then lowered. (Note std_pow=2 interpolates the variance, which is often used in fitting routines).

Additional arguments and key word arguments are as in interp1e (documented below).

Examples

>> x = np.linspace(0, 2*np.pi, 30)
>> u = unumpy.uarray(np.cos(x), np.random.rand(len(x)))
>>
>> fi = utils.uinterp1d(x, u, std_pow=2)
>> xnew = np.linspace(x[0], x[-1], 1000)
>> unew = fi(xnew)
>>
>> f = figure(num='uinterp example')
>> f.clf()
>> ax = f.use_subplot(111)
>> uerrorbar(x, u)
>> uband(xnew, unew)

interp1e Documentation:

Shortcut for scipy.interpolate.interp1d with fill_value=’extrapolate’ and bounds_error=False as defaults

Interpolate a 1-D function.

x and y are arrays of values used to approximate some function f: y = f(x). This class returns a function whose call method uses interpolation to find the value of new points.

Parameters: x : (N,) array_like

A 1-D array of real values.

y : (…,N,…) array_like

A N-D array of real values. The length of y along the interpolation axis must be equal to the length of x.

kind : str or int, optional

Specifies the kind of interpolation as a string or as an integer specifying the order of the spline interpolator to use. The string has to be one of ‘linear’, ‘nearest’, ‘nearest-up’, ‘zero’, ‘slinear’, ‘quadratic’, ‘cubic’, ‘previous’, or ‘next’. ‘zero’, ‘slinear’, ‘quadratic’ and ‘cubic’ refer to a spline interpolation of zeroth, first, second or third order; ‘previous’ and ‘next’ simply return the previous or next value of the point; ‘nearest-up’ and ‘nearest’ differ when interpolating half-integers (e.g. 0.5, 1.5) in that ‘nearest-up’ rounds up and ‘nearest’ rounds down. Default is ‘linear’.

axis : int, optional

Specifies the axis of y along which to interpolate. Interpolation defaults to the last axis of y.

copy : bool, optional

If True, the class makes internal copies of x and y. If False, references to x and y are used. The default is to copy.

bounds_error : bool, optional

If True, a ValueError is raised any time interpolation is attempted on a value outside of the range of x (where extrapolation is necessary). If False, out of bounds values are assigned fill_value. By default, an error is raised unless fill_value="extrapolate".

fill_value : array-like or (array-like, array_like) or “extrapolate”, optional
  • If a ndarray (or float), this value will be used to fill in for requested points outside of the data range. If not provided, then the default is NaN. The array-like must broadcast properly to the dimensions of the non-interpolation axes.

  • If a two-element tuple, then the first element is used as a fill value for x_new < x[0] and the second element is used for x_new > x[-1]. Anything that is not a 2-element tuple (e.g., list or ndarray, regardless of shape) is taken to be a single array-like argument meant to be used for both bounds as below, above = fill_value, fill_value.

    New in version 0.17.0.

  • If “extrapolate”, then points outside the data range will be extrapolated.

    New in version 0.17.0.

assume_sorted : bool, optional

If False, values of x can be in any order and they are sorted first. If True, x has to be an array of monotonically increasing values.

Attributes: fill_value

Methods: __call__

See Also: splrep, splev

Spline interpolation/smoothing based on FITPACK.

UnivariateSpline : An object-oriented wrapper of the FITPACK routines. interp2d : 2-D interpolation

Calling interp1d with NaNs present in input values results in undefined behaviour.

Input values x and y must be convertible to float values like int or float.

Examples:

>>> import matplotlib.pyplot as plt
>>> from scipy import interpolate
>>> x = np.arange(0, 10)
>>> y = np.exp(-x/3.0)
>>> f = interpolate.interp1d(x, y)

>>> xnew = np.arange(0, 9, 0.1)
>>> ynew = f(xnew)   # use interpolation function returned by `interp1d`
>>> plt.plot(x, y, 'o', xnew, ynew, '-')
>>> plt.show()

Initialize a 1-D linear interpolation class.

dtype
class omfit_classes.utils_math.uinterp1e(x, y, kind='linear', axis=- 1, copy=True, assume_sorted=False, std_pow=2, **kw)[source]

Bases: omfit_classes.utils_math.uinterp1d

Adjusted uinterp1d to extrapolate

Arguments and key word arguments are as in uinterp1d (documented below).

Uinterp1d Documentation

Adjusted scipy.interpolate.interp1d (documented below) to interpolate the nominal_values and std_devs of an uncertainty array.

NOTE: uncertainty in the x data is neglected, only uncertainties in the y data are propagated.

Parameters

std_pow – float. Uncertainty is raised to this power, interpolated, then lowered. (Note std_pow=2 interpolates the variance, which is often used in fitting routines).

Additional arguments and key word arguments are as in interp1e (documented below).

Examples

>> x = np.linspace(0, 2*np.pi, 30)
>> u = unumpy.uarray(np.cos(x), np.random.rand(len(x)))
>>
>> fi = utils.uinterp1d(x, u, std_pow=2)
>> xnew = np.linspace(x[0], x[-1], 1000)
>> unew = fi(xnew)
>>
>> f = figure(num='uinterp example')
>> f.clf()
>> ax = f.use_subplot(111)
>> uerrorbar(x, u)
>> uband(xnew, unew)

interp1e Documentation:

Shortcut for scipy.interpolate.interp1d with fill_value=’extrapolate’ and bounds_error=False as defaults

Interpolate a 1-D function.

x and y are arrays of values used to approximate some function f: y = f(x). This class returns a function whose call method uses interpolation to find the value of new points.

Parameters: x : (N,) array_like

A 1-D array of real values.

y : (…,N,…) array_like

A N-D array of real values. The length of y along the interpolation axis must be equal to the length of x.

kind : str or int, optional

Specifies the kind of interpolation as a string or as an integer specifying the order of the spline interpolator to use. The string has to be one of ‘linear’, ‘nearest’, ‘nearest-up’, ‘zero’, ‘slinear’, ‘quadratic’, ‘cubic’, ‘previous’, or ‘next’. ‘zero’, ‘slinear’, ‘quadratic’ and ‘cubic’ refer to a spline interpolation of zeroth, first, second or third order; ‘previous’ and ‘next’ simply return the previous or next value of the point; ‘nearest-up’ and ‘nearest’ differ when interpolating half-integers (e.g. 0.5, 1.5) in that ‘nearest-up’ rounds up and ‘nearest’ rounds down. Default is ‘linear’.

axis : int, optional

Specifies the axis of y along which to interpolate. Interpolation defaults to the last axis of y.

copy : bool, optional

If True, the class makes internal copies of x and y. If False, references to x and y are used. The default is to copy.

bounds_error : bool, optional

If True, a ValueError is raised any time interpolation is attempted on a value outside of the range of x (where extrapolation is necessary). If False, out of bounds values are assigned fill_value. By default, an error is raised unless fill_value="extrapolate".

fill_value : array-like or (array-like, array_like) or “extrapolate”, optional
  • If a ndarray (or float), this value will be used to fill in for requested points outside of the data range. If not provided, then the default is NaN. The array-like must broadcast properly to the dimensions of the non-interpolation axes.

  • If a two-element tuple, then the first element is used as a fill value for x_new < x[0] and the second element is used for x_new > x[-1]. Anything that is not a 2-element tuple (e.g., list or ndarray, regardless of shape) is taken to be a single array-like argument meant to be used for both bounds as below, above = fill_value, fill_value.

    New in version 0.17.0.

  • If “extrapolate”, then points outside the data range will be extrapolated.

    New in version 0.17.0.

assume_sorted : bool, optional

If False, values of x can be in any order and they are sorted first. If True, x has to be an array of monotonically increasing values.

Attributes: fill_value

Methods: __call__

See Also: splrep, splev

Spline interpolation/smoothing based on FITPACK.

UnivariateSpline : An object-oriented wrapper of the FITPACK routines. interp2d : 2-D interpolation

Calling interp1d with NaNs present in input values results in undefined behaviour.

Input values x and y must be convertible to float values like int or float.

Examples:

>>> import matplotlib.pyplot as plt
>>> from scipy import interpolate
>>> x = np.arange(0, 10)
>>> y = np.exp(-x/3.0)
>>> f = interpolate.interp1d(x, y)

>>> xnew = np.arange(0, 9, 0.1)
>>> ynew = f(xnew)   # use interpolation function returned by `interp1d`
>>> plt.plot(x, y, 'o', xnew, ynew, '-')
>>> plt.show()

Initialize a 1-D linear interpolation class.

dtype
class omfit_classes.utils_math.URegularGridInterpolator(points, values, method='linear', bounds_error=True, fill_value=nan, std_pow=2, **kw)[source]

Bases: scipy.interpolate.interpolate.RegularGridInterpolator

Adjusted scipy.interpolate.RegularGridInterpolator (documented below) to interpolate the nominal_values and std_devs of an uncertainty array.

Parameters

std_pow – float. Uncertainty is raised to this power, interpolated, then lowered. (Note std_pow=2 interpolates the variance, which is often used in fitting routines).

Additional arguments and key word arguments are as in RegularGridInterpolator.

Examples

Make some sample 2D data

>> x = np.linspace(0, 2*np.pi, 30)
>> y = np.linspace(0, 2*np.pi, 30)
>> z = np.cos(x[:, np.newaxis] + y[np.newaxis, :])
>> u = unumpy.uarray(np.cos(x[:, np.newaxis] + y[np.newaxis, :]), np.random.rand(*z.shape))

Form interpolator

>> fi = URegularGridInterpolator((x,y),u,std_pow=2)

interpolate along the diagonal of (x,y)

>> xnew = np.linspace(x[0], x[-1], 1000)
>> unew = fi(list(zip(xnew, xnew)))

Compare original and interpolated values.

>> f = figure(num='URegularGridInterpolator example')
>> f.clf()
>> ax = f.use_subplot(111)
>> uerrorbar(x, u.diagonal())
>> uband(xnew, unew)

Note the interpolated uncertainty between points is curved by std_pow=2. The curve is affected by the uncertainty of nearby off diagonal points (not shown).

RegularGridInterpolator Documentation

Interpolation on a regular grid in arbitrary dimensions

The data must be defined on a regular grid; the grid spacing however may be uneven. Linear and nearest-neighbor interpolation are supported. After setting up the interpolator object, the interpolation method (linear or nearest) may be chosen at each evaluation.

Parameters: points : tuple of ndarray of float, with shapes (m1, ), …, (mn, )

The points defining the regular grid in n dimensions.

valuesarray_like, shape (m1, …, mn, …)

The data on the regular grid in n dimensions.

methodstr, optional

The method of interpolation to perform. Supported are “linear” and “nearest”. This parameter will become the default for the object’s __call__ method. Default is “linear”.

bounds_errorbool, optional

If True, when interpolated values are requested outside of the domain of the input data, a ValueError is raised. If False, then fill_value is used.

fill_valuenumber, optional

If provided, the value to use for points outside of the interpolation domain. If None, values outside the domain are extrapolated.

Methods: __call__

Notes: Contrary to LinearNDInterpolator and NearestNDInterpolator, this class avoids expensive triangulation of the input data by taking advantage of the regular grid structure.

If any of points have a dimension of size 1, linear interpolation will return an array of nan values. Nearest-neighbor interpolation will work as usual in this case.

New in version 0.14.

Examples: Evaluate a simple example function on the points of a 3-D grid:

>>> from scipy.interpolate import RegularGridInterpolator
>>> def f(x, y, z):
...     return 2 * x**3 + 3 * y**2 - z
>>> x = np.linspace(1, 4, 11)
>>> y = np.linspace(4, 7, 22)
>>> z = np.linspace(7, 9, 33)
>>> data = f(*np.meshgrid(x, y, z, indexing='ij', sparse=True))

data is now a 3-D array with data[i,j,k] = f(x[i], y[j], z[k]). Next, define an interpolating function from this data:

>>> my_interpolating_function = RegularGridInterpolator((x, y, z), data)

Evaluate the interpolating function at the two points (x,y,z) = (2.1, 6.2, 8.3) and (3.3, 5.2, 7.1):

>>> pts = np.array([[2.1, 6.2, 8.3], [3.3, 5.2, 7.1]])
>>> my_interpolating_function(pts)
array([ 125.80469388,  146.30069388])

which is indeed a close approximation to [f(2.1, 6.2, 8.3), f(3.3, 5.2, 7.1)].

See also: NearestNDInterpolator : Nearest neighbor interpolation on unstructured data in N dimensions

LinearNDInterpolator : Piecewise linear interpolant on unstructured data in N dimensions

References: .. [1] Python package regulargrid by Johannes Buchner, see

[2] Wikipedia, “Trilinear interpolation”, https://en.wikipedia.org/wiki/Trilinear_interpolation

[3] Weiser, Alan, and Sergio E. Zarantonello. “A note on piecewise linear and multilinear table interpolation in many dimensions.” MATH. COMPUT. 50.181 (1988): 189-196. https://www.ams.org/journals/mcom/1988-50-181/S0025-5718-1988-0917826-0/S0025-5718-1988-0917826-0.pdf

omfit_classes.utils_math.cumtrapz(y, x=None, dx=1.0, axis=- 1, initial=0)[source]

This is a convenience function for scipy.integrate.cumtrapz. Notice that here initial=0 which is what one most often wants, rather than the initial=None, which is the default for the scipy function.

Cumulatively integrate y(x) using the composite trapezoidal rule. This is the right way to integrate derivative quantities that were calculated with gradient. If a derivative was obtained with the diff command, then the cumsum command should be used for its integration.

Parameters
  • y – Values to integrate.

  • x – The coordinate to integrate along. If None (default), use spacing dx between consecutive elements in y.

  • dx – Spacing between elements of y. Only used if x is None

  • axis – Specifies the axis to cumulate. Default is -1 (last axis).

  • initial – If given, uses this value as the first value in the returned result. Typically this value should be 0. If None, then no value at x[0] is returned and the returned array has one element less than y along the axis of integration.

Returns

The result of cumulative integration of y along axis. If initial is None, the shape is such that the axis of integration has one less value than y. If initial is given, the shape is equal to that of y.
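The effect of initial=0 on the output length can be sketched in plain numpy (cumtrapz_sketch is illustrative, not the OMFIT/scipy code):

```python
import numpy as np

def cumtrapz_sketch(y, x, initial=0.0):
    """Sketch: cumulative trapezoidal integration; initial=0 keeps len(output) == len(y)."""
    steps = np.diff(x) * (y[1:] + y[:-1]) / 2.0
    return np.concatenate(([initial], initial + np.cumsum(steps)))

x = np.linspace(0.0, 1.0, 101)
Y = cumtrapz_sketch(2.0 * x, x)   # integrate dy/dx of y = x**2
assert Y.shape == x.shape         # initial=0 preserves the original length
assert abs(Y[-1] - 1.0) < 1e-9    # trapezoid rule is exact for a linear integrand
```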

omfit_classes.utils_math.deriv(x, y)[source]

This function returns the derivative of the 2nd order lagrange interpolating polynomial of y(x) When re-integrating, to recover the original values y use cumtrapz(dydx,x)

Parameters
  • x – x axis array

  • y – y axis array

Returns

dy/dx

omfit_classes.utils_math.reverse_enumerate(l)[source]
omfit_classes.utils_math.factors(n)[source]

get all the factors of a number

> print(list(factors(100)))

omfit_classes.utils_math.greatest_common_delta(input_int_array)[source]

Given an array of integers it returns the greatest uniform delta step

Parameters

input_int_array – array of integers

Returns

greatest uniform delta step
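The idea reduces to a gcd over the consecutive differences; the sketch below is illustrative, not the actual implementation:

```python
from functools import reduce
from math import gcd

import numpy as np

def greatest_common_delta_sketch(a):
    """Sketch: gcd of the absolute consecutive differences of an integer array."""
    diffs = np.abs(np.diff(np.asarray(a, dtype=int)))
    return reduce(gcd, [int(d) for d in diffs], 0)

assert greatest_common_delta_sketch([0, 4, 10, 16]) == 2
```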

omfit_classes.utils_math.mad(x, axis=- 1, std_norm=True)[source]

Median absolute deviation, defined as 1.4826 * np.median(np.abs(np.median(x)-x))

Parameters
  • x – input data array

  • axis – axis along which the median absolute deviation is computed

  • std_norm – if True, scale by 1.4826 so the result estimates the standard deviation of normally distributed data
Returns

Median absolute deviation of the input data
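The documented formula can be written out directly (mad_sketch is a minimal sketch of the 1D case, ignoring the axis argument):

```python
import numpy as np

def mad_sketch(x, std_norm=True):
    """Sketch of the documented formula: 1.4826 * median(|median(x) - x|)."""
    m = np.median(np.abs(np.median(x) - np.asarray(x, float)))
    return 1.4826 * m if std_norm else m

assert mad_sketch([1, 2, 3, 4, 100], std_norm=False) == 1.0  # robust to the outlier
```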

omfit_classes.utils_math.mad_outliers(data, m=3.0, outliers_or_valid='valid')[source]

Function to identify outlier data based on the median absolute deviation (mad) distance. Note: uses the median absolute deviation defined as 1.4826 * np.median(np.abs(np.median(x)-x))

Parameters
  • data – input data array (if a dictionary of arrays, the mad_outliers function is applied to each of the values in the dictionary)

  • m – mad distance multiplier from the median after which a point is considered an outlier

  • outliers_or_valid – return valid/outlier points (valid is default)

Returns

boolean array indicating which data points are within m MAD of the median value (i.e. the valid points)
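The selection logic can be sketched as a boolean mask over the MAD distance (mad_outliers_sketch is a hypothetical name; the dictionary-handling branch of the real function is omitted):

```python
import numpy as np

def mad_outliers_sketch(data, m=3.0, outliers_or_valid='valid'):
    """Boolean mask of points within m MAD-distances of the median."""
    data = np.asarray(data, dtype=float)
    mad = 1.4826 * np.median(np.abs(np.median(data) - data))
    valid = np.abs(data - np.median(data)) <= m * mad
    return valid if outliers_or_valid == 'valid' else ~valid

data = np.array([1.0, 1.1, 0.9, 1.05, 50.0])
print(mad_outliers_sketch(data))  # the 50.0 point is flagged invalid
```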

omfit_classes.utils_math.bin_outliers(data, mincount, nbins, outliers_or_valid='valid')[source]

Function to identify outlier data based on binning of the data. The algorithm bins the data into nbins bins and then considers as valid only the data that fall within bins that have at least mincount counts.

Parameters
  • data – input data array (if a dictionary of arrays, the bin_outliers function is applied to each of the values in the dictionary)

  • mincount – minimum number of counts within a bin for data to be considered as valid

  • nbins – number of bins for binning of data

  • outliers_or_valid – return valid/outlier points (valid is default)

Returns

boolean array indicating which data points are valid or not
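The binning approach can be sketched with np.histogram and np.digitize (bin_outliers_sketch is a hypothetical name; the dictionary-handling branch of the real function is omitted):

```python
import numpy as np

def bin_outliers_sketch(data, mincount, nbins, outliers_or_valid='valid'):
    """Valid points fall in histogram bins holding at least mincount samples."""
    data = np.asarray(data, dtype=float)
    counts, edges = np.histogram(data, bins=nbins)
    # index of the bin each sample falls into (clip so the right edge
    # of the last bin maps into that bin rather than past it)
    ibin = np.clip(np.digitize(data, edges) - 1, 0, nbins - 1)
    valid = counts[ibin] >= mincount
    return valid if outliers_or_valid == 'valid' else ~valid

data = np.array([1.0, 1.2, 0.8, 1.1, 0.9, 10.0])
print(bin_outliers_sketch(data, mincount=2, nbins=5))  # 10.0 sits alone in its bin
```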

omfit_classes.utils_math.powerlaw_fit(x, y)[source]

Evaluates a multidimensional power law for y based on inputs x

Parameters
  • x – 2D array of inputs [N,M]

  • y – 1D array of output [M]

Returns

power law coefficient p, fitting function f(p,x)
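A power law y = c * prod_i x[i]**p[i] is linear in log space, so the exponents can be recovered by ordinary least squares on the logarithms. A sketch of that approach (powerlaw_fit_sketch is a hypothetical name and its return convention, exponents plus prefactor, may differ from the real function):

```python
import numpy as np

def powerlaw_fit_sketch(x, y):
    """Fit y = c * prod_i x[i]**p[i] by linear least squares in log space.
    x: [N, M] array of N inputs at M samples; y: [M] outputs.
    Returns the exponents p ([N],) and the prefactor c."""
    x = np.asarray(x, dtype=float)
    A = np.vstack([np.log(x), np.ones(x.shape[1])]).T  # design matrix [M, N+1]
    coef, *_ = np.linalg.lstsq(A, np.log(y), rcond=None)
    return coef[:-1], np.exp(coef[-1])

rng = np.random.default_rng(0)
x = rng.uniform(0.5, 2.0, size=(2, 200))
y = 3.0 * x[0] ** 1.5 * x[1] ** -0.5   # known power law
p, c = powerlaw_fit_sketch(x, y)
```

With noise-free data the exponents and prefactor are recovered to machine precision.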

omfit_classes.utils_math.exp_no_overflow(arg, factor=1.0, minpad=1000.0, extra_pad=1000.0, return_big_small=False)[source]

Performs exp(arg) but first limits the value of arg to prevent floating point math errors. Checks sys.float_info so it can avoid floating point overflow or underflow. Can be informed of factors you plan on multiplying with the exponential result later, in order to make the limits more restrictive (the limits are “padded”) and avoid over/underflow later on in your code.

Parameters
  • arg – Argument of exponential function

  • factor – Factor that might be multiplied in to exponential function later. Adjusts limits to make them more restrictive and prevent overflow later on. The adjustment to limits is referred to as padding.

  • minpad – Force the padding to be at least a certain size.

  • extra_pad – Extra padding beyond what’s determined by minpad and factor

  • return_big_small – T/F: flag to just return big and small numbers. You may be able to speed up execution in repeated calls by getting appropriate limits, doing your own cropping, and looping exp() instead of looping exp_no_overflow(). Even in this case, exp_no_overflow() can help you pick good limits. Or if you don’t have time for any of that, you can probably use -70 and 70 as the limits on arg, which will get you to order 1e30.

Returns

exp(arg) with no math errors, or (big, small) if return_big_small is set
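The core idea, clipping the argument against limits derived from sys.float_info, can be sketched as follows (exp_no_overflow_sketch is a hypothetical name; the real function's padding rules with minpad and extra_pad are more elaborate):

```python
import sys
import numpy as np

def exp_no_overflow_sketch(arg, factor=1.0):
    """exp(arg) with arg clipped so that factor*exp(arg) stays within
    the representable float range (no overflow/underflow warnings)."""
    pad = abs(np.log(abs(factor))) if factor else 0.0
    big = np.log(sys.float_info.max) - pad - 1.0
    small = np.log(sys.float_info.min) + pad + 1.0
    return np.exp(np.clip(arg, small, big))

out = exp_no_overflow_sketch(np.array([-1e4, 0.0, 1e4]))  # all finite, no warnings
```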

omfit_classes.utils_math.dimens(x)[source]

Returns the number of dimensions in a mixed list/array object. Handles multi-level nested iterables.

From Craig Burgler on Stack Overflow https://stackoverflow.com/a/39255678/6605826

This would be useful, for example on a case like: x = [np.array([1, 2, 3]), np.array([2, 5, 10, 20])]

Parameters

x – iterable (maybe nested iterable)

Returns

int Generalized dimension count, considering that some elements in a list might be iterable and others not.
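The generalized count is naturally recursive: a non-iterable contributes 0, and a container contributes one level plus the deepest of its elements. A sketch (dimens_sketch is a hypothetical name; the real implementation may differ):

```python
import numpy as np

def dimens_sketch(x):
    """Generalized dimension count: 0 for scalars (strings treated as
    scalars), else 1 + the maximum dimension of any element."""
    if isinstance(x, str) or not hasattr(x, '__iter__'):
        return 0
    return 1 + max((dimens_sketch(item) for item in x), default=0)

x = [np.array([1, 2, 3]), np.array([2, 5, 10, 20])]  # ragged: not a valid 2D array
print(dimens_sketch(x))  # counts the list level plus the array level -> 2
```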

omfit_classes.utils_math.safe_divide(numerator, denominator, fill_value=0)[source]

Division function to safely compute the ratio of two lists/arrays. The fill_value input parameter specifies what value should be filled in for the result whenever the denominator is 0.

Parameters
  • numerator – numerator of the division

  • denominator – denominator of the division

  • fill_value – fill value when denominator is 0

Returns

division with fill_value where nan or inf would have been instead
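The safe division can be sketched with np.errstate and np.where (safe_divide_sketch is a hypothetical name, not the OMFIT implementation):

```python
import numpy as np

def safe_divide_sketch(numerator, denominator, fill_value=0):
    """Elementwise ratio with fill_value wherever the denominator is 0."""
    num = np.asarray(numerator, dtype=float)
    den = np.asarray(denominator, dtype=float)
    with np.errstate(divide='ignore', invalid='ignore'):
        # the raw quotient is computed everywhere, but warnings are
        # suppressed and the bad entries are overwritten by fill_value
        out = np.where(den != 0, num / den, fill_value)
    return out

print(safe_divide_sketch([1.0, 2.0, 3.0], [2.0, 0.0, 0.5]))
```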

omfit_classes.utils_math.nannanargmin(x, axis=1)[source]

Performs nanargmin along an axis on a 2D array while dodging errors from empty rows or columns

argmin finds the index where the minimum value occurs. nanargmin ignores NaNs while doing this.

However, if nanargmin is operating along just one axis and it encounters a row that’s all NaN, it raises a ValueError, because there’s no valid index for that row. It can’t insert NaN into the result, either, because the result should be an integer array (or else it couldn’t be used for indexing), and NaN is a float. This function is for cases where we would like nanargmin to give valid results where possible and clearly invalid indices for rows that are all NaN. That is, it returns -N, where N is the row length, if the row is all NaN.

Parameters
  • x – 2D float array Input data to process

  • axis – int 0 or 1

Returns

1D int array of indices of the minimum value of each row or column. Rows/columns which are all NaN will have -N, where N is the size of the relevant dimension (so -N is an invalid index).
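The dodge can be sketched by substituting +inf for NaN before the argmin and then stamping -N onto the all-NaN rows afterwards (nannanargmin_sketch is a hypothetical name; the real implementation may differ):

```python
import numpy as np

def nannanargmin_sketch(x, axis=1):
    """nanargmin that returns -N (a clearly invalid index) for all-NaN
    rows/columns instead of raising ValueError."""
    n = x.shape[axis]
    all_nan = np.all(np.isnan(x), axis=axis)
    safe = np.where(np.isnan(x), np.inf, x)  # inf never wins argmin unless all NaN
    idx = np.argmin(safe, axis=axis)
    idx[all_nan] = -n
    return idx

x = np.array([[3.0, np.nan, 1.0],
              [np.nan, np.nan, np.nan]])
print(nannanargmin_sketch(x, axis=1))  # [2, -3]
```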

omfit_classes.utils_math.calcz(x, y, consistent_reconstruction=True)[source]

Calculate Z: the inverse normalized scale-length. The function is coded in such a way as to avoid NaN and Inf where y==0

z = -dy/dx/y

Parameters
  • x – x axis array

  • y – y axis array

  • consistent_reconstruction – calculate z so that integration of z with integz exactly generates original profile

>> z_cF = calcz(rho, ne, consistent_reconstruction=False)
>> z_cT = calcz(rho, ne, consistent_reconstruction=True)
>> pyplot.plot(rho, ne)
>> pyplot.plot(rho, integz(rho, z_cT, rho[0], ne[0], rho), '.')
>> pyplot.plot(rho, integz(rho, z_cF, rho[0], ne[0], rho), 'x')

Returns

z calculated at x
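The naive version of the formula is one line of NumPy; an exponential profile is a useful sanity check because its inverse scale-length is constant. A sketch (calcz_sketch is a hypothetical name; it omits the y==0 guards and the consistent_reconstruction option of the real function):

```python
import numpy as np

def calcz_sketch(x, y):
    """Inverse normalized scale-length z = -dy/dx / y (naive version,
    without NaN/Inf protection where y == 0)."""
    return -np.gradient(y, x) / y

x = np.linspace(0.0, 1.0, 201)
y = np.exp(-2.0 * x)       # analytically, z = 2 everywhere
z = calcz_sketch(x, y)
```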

omfit_classes.utils_math.mergez(x0, z0, x1, z1, core, nml, edge, x)[source]

Perform merge of two Z profiles

  • rho < core: Z goes linearly to 0 on axis

  • core < rho < nml: profile 0

  • nml < rho < edge: linearly connect Z at nml and edge

  • rho > edge: profile 1

Parameters
  • x0 – abscissa of first profile

  • z0 – values of first Z profile

  • x1 – abscissa of second profile

  • z1 – values of second Z profile

  • core – abscissa of core boundary

  • nml – abscissa of nml boundary

  • edge – abscissa of edge boundary

Returns

tuple of merged x and z points

omfit_classes.utils_math.integz(x0, z0, xbc, ybc, x, clipz=True)[source]

Integrate inverse scale-length Z to get profile

Parameters
  • x0 – abscissa of the Z profile

  • z0 – Z profile

  • xbc – abscissa of the boundary condition

  • ybc – value of the boundary condition

  • x – integrate the Z profile at these points

  • clipz – use clipped z for extrapolation (alternatively linear)

Returns

integrated Z profile at points defined by x

omfit_classes.utils_math.detect_peaks(x, mph=None, mpd=1, threshold=0, edge='rising', kpsh=False, valley=False, show=False, ax=None)[source]

Detect peaks in data based on their amplitude and other features.

x : 1D array_like

data.

mph : {None, number}, optional (default = None)

detect peaks that are greater than minimum peak height.

mpd : positive integer, optional (default = 1)

detect peaks that are at least separated by minimum peak distance (in number of data points).

threshold : positive number, optional (default = 0)

detect peaks (valleys) that are greater (smaller) than threshold in relation to their immediate neighbors.

edge : {None, ‘rising’, ‘falling’, ‘both’}, optional (default = ‘rising’)

for a flat peak, keep only the rising edge (‘rising’), only the falling edge (‘falling’), both edges (‘both’), or don’t detect a flat peak (None).

kpsh : bool, optional (default = False)

keep peaks with same height even if they are closer than mpd.

valley : bool, optional (default = False)

if True, detect valleys (local minima) instead of peaks.

show : bool, optional (default = False)

if True, plot data in a matplotlib figure.

ax : a matplotlib.axes.Axes instance, optional (default = None).

Returns

ind : 1D array_like

indices of the peaks in x.

The detection of valleys instead of peaks is performed internally by simply negating the data: ind_valleys = detect_peaks(-x)

The function can handle NaN’s

See this IPython Notebook: http://nbviewer.ipython.org/github/demotu/BMC/blob/master/notebooks/DetectPeaks.ipynb

>> from detect_peaks import detect_peaks
>> x = np.random.randn(100)
>> x[60:81] = np.nan
>> # detect all peaks and plot data
>> ind = detect_peaks(x, show=True)
>> print(ind)

>> x = np.sin(2*np.pi*5*np.linspace(0, 1, 200)) + np.random.randn(200)/5.
>> # set minimum peak height = 0 and minimum peak distance = 20
>> detect_peaks(x, mph=0, mpd=20, show=True)

>> x = [0, 1, 0, 2, 0, 3, 0, 2, 0, 1, 0]
>> # set minimum peak distance = 2
>> detect_peaks(x, mpd=2, show=True)

>> x = np.sin(2*np.pi*5*np.linspace(0, 1, 200)) + np.random.randn(200)/5.
>> # detection of valleys instead of peaks
>> detect_peaks(x, mph=0, mpd=20, valley=True, show=True)

>> x = [0, 1, 1, 0, 1, 1, 0]
>> # detect both edges
>> detect_peaks(x, edge='both', show=True)

>> x = [-2, 1, -2, 2, 1, 1, 3, 0]
>> # set threshold = 2
>> detect_peaks(x, threshold=2, show=True)

omfit_classes.utils_math.find_feature(y, x=None, M=0.01, k=5.0, xmin=None, xmax=None, retall=False)[source]

Identify edges and center of a sharp feature (pedestal, peak, or well) in an otherwise smooth profile.

Parameters
  • y – np.ndarray. Parameter of interest, such as T_e

  • x – np.ndarray. Position basis, such as psi_N (default is np.arange(len(y)))

  • M – float. Gaussian smoothing sigma in units of position basis

  • k – float. Difference of gaussians factor (second smoothing sigma is M*k)

  • xmin – float. Lower limit on range of X values to consider in search

  • xmax – float. Upper limit on range of X values to consider in search

  • retall – bool. Return gaussian smoothed functions

Returns

(inner edge, outer edge, center, value at inner edge, value at outer edge)

Examples

Here is a simple 1D example using find_feature to identify the steep gradient region of a tanh.

>> x = np.linspace(0, 1, 1000)
>> center = 0.8
>> halfwidth = 0.05
>> y = np.tanh((center - x)/halfwidth) + 1
>> dydx = deriv(x, y)
>> indx = find_feature(dydx, x=x)
>> xpt = x[indx]
>> ypt = dydx[indx]

>> foundwidth = x[indx[2]] - x[indx[0]]
>> print("Width within {:.2f}%".format((1 - 2*halfwidth/foundwidth)*100))

Note we chose M to set the appropriate scale of interest for the above problem. We can see the sensitivity to this by scanning the tanh width in a 2D example.

>> xs = x.reshape(1, -1)
>> halfwidths = np.linspace(0.01, 0.1, 50).reshape(-1, 1)
>> ys = np.tanh((center - xs)/halfwidths) + 1
>> dydxs = np.gradient(ys)[1]/np.gradient(x)

M sets the scale for the steepness of interest. Using one M for a range of bump sizes isolates only the scale of interest:
>> indxs = np.apply_along_axis(find_feature, 1, dydxs, x=x, M=0.01, k=5)
Tracking the scale of the bump with M approximates the tanh widths:
>> #indxs = [find_feature(dy, x=x, M=2*hw/10., k=5) for dy, hw in zip(dydxs, halfwidths)]

>> # found peak and edge points of steep gradient region
>> foundwidths = map(lambda indx: x[indx[2]] - x[indx[0]], indxs)
>> xpts = [x[i] for i in indxs]
>> ypts = [yy[i] for yy, i in zip(ys, indxs)]
>> dypts = [yy[i] for yy, i in zip(dydxs, indxs)]

>> # tanh half width points
>> # Note np.tanh(1) = 0.76, and d/dx tanh = sech^2 = 0.42
>> ihws = np.array([(np.abs(x - center - hw).argmin(),
>>                   np.abs(x - center + hw).argmin()) for hw in halfwidths[:, 0]])
>> xhws = [x[i] for i in ihws]
>> yhws = [yy[i] for yy, i in zip(ys, ihws)]
>> dyhws = [yy[i] for yy, i in zip(dydxs, ihws)]

Visualize the comparison between tanh widths and the identified region of steep gradient:
>> close('all')
>> f, ax = pyplot.subplots(3, 1)
>> ax[0].set_ylabel('y = np.tanh((c-x)2/w)+1')
>> ax[0].set_xlabel('x')
>> ax[1].set_ylabel('dy/dx')
>> ax[1].set_xlabel('x')
>> for i in [0, 24, 49]:
...     l, = ax[0].plot(x, ys[i])
...     w1, = ax[0].plot(xhws[i], yhws[i], marker='o', ls='', fillstyle='none', color=l.get_color())
...     w2, = ax[0].plot(xpts[i], ypts[i], marker='x', ls='', fillstyle='none', color=l.get_color())
...     l, = ax[1].plot(x, dydxs[i])
...     w1, = ax[1].plot(xhws[i], dyhws[i], marker='o', ls='', fillstyle='none', color=l.get_color())
...     w2, = ax[1].plot(xpts[i], dypts[i], marker='x', ls='', fillstyle='none', color=l.get_color())
>> ax[-1].set_ylabel('Edge Width')
>> ax[-1].set_xlabel('Tanh Width')
>> ax[-1].plot([0, 2*halfwidths[-1, 0]], [0, 2*halfwidths[-1, 0]])
>> ax[-1].plot(2*halfwidths[:, 0], foundwidths, marker='o', lw=0)

omfit_classes.utils_math.parabola(x, y)[source]

y = a*x^2 + b*x + c

Returns

a,b,c
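Fitting a parabola through data points is a degree-2 polynomial fit. A standalone sketch using np.polyfit (parabola_sketch is a hypothetical name; the actual implementation may solve the linear system directly):

```python
import numpy as np

def parabola_sketch(x, y):
    """Coefficients a, b, c of y = a*x**2 + b*x + c through the data
    (least squares if more than three points are given)."""
    a, b, c = np.polyfit(x, y, 2)
    return a, b, c

# three points on y = 2x^2 + 3x + 1
a, b, c = parabola_sketch(np.array([0.0, 1.0, 2.0]), np.array([1.0, 6.0, 15.0]))
```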

omfit_classes.utils_math.parabolaMax(x, y, bounded=False)[source]

Calculate a parabola through x,y, then return the extremum point of the parabola

Parameters
  • x – At least three abscissae points

  • y – The corresponding ordinate points

  • bounded – False, ‘max’, or ‘min’ - False: The extremum is returned regardless of location relative to x (default) - ‘max’ (‘min’): The extremum location must be within the bounds of x, and if not return the location and value of max(y) (min(y))

Returns

x_max,y_max - The location and value of the extremum
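Once the parabola coefficients are known, the extremum sits at the vertex x = -b/(2a). A sketch of the unbounded case (parabola_max_sketch is a hypothetical name; the real function's bounded='max'/'min' fallback to the data extremum is omitted):

```python
import numpy as np

def parabola_max_sketch(x, y):
    """Vertex of the parabola through (x, y): x_ext = -b/(2a)."""
    a, b, c = np.polyfit(x, y, 2)
    x_ext = -b / (2.0 * a)
    y_ext = a * x_ext**2 + b * x_ext + c
    return x_ext, y_ext

# symmetric points around a peak at (1, 2)
x_max, y_max = parabola_max_sketch([0.0, 1.0, 2.0], [1.0, 2.0, 1.0])
```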

omfit_classes.utils_math.parabolaCycle(X, Y, ix)[source]
omfit_classes.utils_math.parabolaMaxCycle(X, Y, ix, bounded=False)[source]

Calculate a parabola through X[ix-1:ix+2],Y[ix-1:ix+2], with proper wrapping of indices, then return the extremum point of the parabola

Parameters
  • X – The abscissae points: an iterable to be treated as periodic

  • Y – The corresponding ordinate points

  • ix – The index of X about which to find the extremum

  • bounded – False, ‘max’, or ‘min’ - False: The extremum is returned regardless of location relative to x (default) - ‘max’ (‘min’): The extremum location must be within the bounds of x, and if not return the location and value of max(y) (min(y))

Returns

x_max,y_max - The location and value of the extremum

omfit_classes.utils_math.paraboloid(x, y, z)[source]

z = ax*x^2 + bx*x + ay*y^2 + by*y + c

NOTE: This function uses only the first 5 points of the x, y, z arrays to evaluate the paraboloid coefficients

Returns

ax,bx,ay,by,c

omfit_classes.utils_math.smooth(x, window_len=11, window='hann', axis=0)[source]

smooth the data using a window with requested size.

This method is based on the convolution of a scaled window with the signal. The signal is prepared by introducing reflected copies of the signal (with the window size) in both ends so that transient parts are minimized in the beginning and end part of the output signal.

Parameters
  • x – the input signal

  • window_len – the dimension of the smoothing window; should be an odd integer; ignored if window is an array

  • window – the window function to use; see the scipy.signal.get_window documentation for a list of available windows. ‘flat’ or ‘boxcar’ will produce a moving average smoothing. Can also be an array, in which case it is used as the window function and window_len is ignored

  • axis – the axis that is smoothed

Returns

the smoothed signal

Examples

>> t = np.linspace(-2, 2, 100)
>> x = np.sin(t) + randn(len(t))*0.1
>> y = smooth(x, 11)

see also:

scipy.signal.get_window, np.convolve, scipy.signal.lfilter

omfit_classes.utils_math.smoothIDL(data, n_smooth=3, timebase=None, causal=False, indices=False, keep=False)[source]

An efficient top hat smoother based on the smooth IDL routine. The use of cumsum-shift(cumsum) means that execution time is 2xN flops compared to 2 x n_smooth x N for a convolution. If supplied with a timebase, the shortened timebase is returned as the first of a tuple.

Parameters
  • data – (timebase, data) is a shorthand way to pass timebase

  • n_smooth – smooth bin size

  • timebase – if passed, a tuple with (timebase,smoothed_data) gets returned

  • causal – If True, the smoothed signal never precedes the input; otherwise, the smoothed signal is “centred” on the input (for n_smooth odd) and close (1/2 timestep off) for even

  • indices – if True, return the timebase indices instead of the times

  • keep – Better to throw the partially cooked ends away, but if you want to keep them use keep=True. This is useful for quick filtering applications so that original and filtered signals are easily compared without worrying about timebase

>> smoothIDL([1,2,3,4], 3)
np.array([ 2., 3.])
>> smoothIDL([1.,2.,3.,4.,5.], 3)
np.array([ 2., 3., 4.])
>> smoothIDL([1,2,3,4,5], timebase=np.array([1,2,3,4,5]), n_smooth=3, causal=False)
(np.array([2, 3, 4]), np.array([ 2., 3., 4.]))
>> smoothIDL([0,0,0,3,0,0,0], timebase=[1,2,3,4,5,6,7], n_smooth=3, causal=True)
([3, 4, 5, 6, 7], np.array([ 0., 1., 1., 1., 0.]))
>> smoothIDL([0,0,0,3,0,0,0], timebase=[1,2,3,4,5,6,7], n_smooth=3, causal=True, indices=True)
([2, 3, 4, 5, 6], np.array([ 0., 1., 1., 1., 0.]))
>> smoothIDL([0, 0, 0, 0, 5, 0, 0, 0, 0, 0, 0], 5, keep=1)
np.array([ 0., 0., 1., 1., 1., 1., 1., 0., 0., -1., -1.])

omfit_classes.utils_math.pcs_rc_smooth(xx, yy, s)[source]

RC lowpass smoothing filter using same formula as in the PCS. Can be used to reproduce PCS lowpass filtering.

Parameters
  • xx – array Independent variable (like time). Does not have to be evenly spaced.

  • yy – array matching xx. Dependent variable

  • s – float Smoothing time constant in units matching xx

Returns

array Smoothed/filtered version of yy
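A first-order RC low-pass filter can be sketched as a recursive blend of each new sample into the running output. This is one common discretization, shown for illustration only (rc_smooth_sketch is a hypothetical name, and the exact formula used by the PCS may differ):

```python
import numpy as np

def rc_smooth_sketch(xx, yy, s):
    """First-order RC low-pass: each step blends the new sample in
    with weight dt/(s + dt), where s is the smoothing time constant.
    Works on non-uniformly spaced xx."""
    out = np.empty_like(np.asarray(yy, dtype=float))
    out[0] = yy[0]
    for i in range(1, len(yy)):
        dt = xx[i] - xx[i - 1]
        alpha = dt / (s + dt)
        out[i] = out[i - 1] + alpha * (yy[i] - out[i - 1])
    return out

t = np.linspace(0.0, 1.0, 100)
y = np.ones(100)                      # a constant input passes through unchanged
smoothed = rc_smooth_sketch(t, y, s=0.05)
```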

omfit_classes.utils_math.get_time_windows_for_causal_smooth(t_base, t_target, t_window)[source]
omfit_classes.utils_math.causal_smooth_equidistant(t, y, t_window, t_mask=None, t_target=None, return_full_filtered=False)[source]
omfit_classes.utils_math.smooth_by_convolution(yi, xi=None, xo=None, window_size=None, window_function='gaussian', axis=0, causal=False, interpolate=False, std_dev=2)[source]

Convolution of a non-uniformly discretized array with window function.

The output values are np.nan where no points are found in finite windows (weight is zero). The gaussian window is infinite in extent, and thus returns values for all xo.

Supports uncertainties arrays. If the input –does not– have associated uncertainties, then the output will –not– have associated uncertainties.

Parameters
  • yi – array_like (…,N,…). Values of input array

  • xi – array_like (N,). Original grid points of input array (default: y indices)

  • xo – array_like (M,). Output grid points of convolution array (default xi)

  • window_size – float. Width passed to the window function (default: maximum xi step). For the Gaussian, sigma=window_size/4. and the convolution is integrated across +/-4.*sigma.

  • window_function – str/function. Accepted strings are ‘hanning’,’bartlett’,’blackman’,’gaussian’, or ‘boxcar’. Function should accept x and window_size as arguments and return a corresponding weight.

  • axis – int. Axis of y along which convolution is performed

  • causal – int. Forces f(x>0) = 0.

  • interpolate – False or integer > 0. If set, interpolate the data so that there are interpolate data points within a time window. This is useful in the presence of sparse data, which would otherwise result in stair-case output. The integer value sets the number of points per window size.

  • std_dev – str/int. Accepted strings are ‘none’, ‘propagate’, ‘population’, ‘expand’, ‘deviation’, ‘variance’. Only ‘population’ and ‘none’ are valid if yi is not an uncertainties array (i.e. std_devs(yi) is all zeros). Setting to an integer will convolve the error uncertainties to the std_dev power before taking the std_dev root.
    std_dev = ‘propagate’ is true propagation of errors (slow if not interpolating)
    std_dev = ‘population’ is the weighted “standard deviation” of the points themselves (strictly correct for the boxcar window)
    std_dev = ‘expand’ is propagation of errors weighted by w~1/window_function
    std_dev = ‘deviation’ is equivalent to std_dev=1
    std_dev = ‘variance’ is equivalent to std_dev=2

Returns

convolved array on xo

>> M = 300
>> ninterp = 10
>> window = ['hanning', 'gaussian', 'boxcar'][1]
>> width = 0.05
>> f = figure(num='nu_conv example')
>> f.clf()
>> ax = f.use_subplot(111)
>>
>> xo = np.linspace(0, 1, 1000)
>>
>> x0 = xo
>> y0 = (x0 > 0.25) & (x0 < 0.75)
>> pyplot.plot(x0, y0, 'b-', label='function')
>>
>> x1 = np.cumsum(rand(M))
>> x1 = (x1 - min(x1)) / (max(x1) - min(x1))
>> y1 = (x1 > 0.25) & (x1 < 0.75)
>> pyplot.plot(x1, y1, 'b.', ms=10, label='subsampled function')
>> if window == 'hanning':
>>     ys = smooth(interp1e(x0, y0)(xo), int(len(xo)*width))
>>     pyplot.plot(xo, ys, 'g-', label='smoothed function')
>>     yc = smooth(interp1e(x1, y1)(xo), int(len(xo)*width))
>>     pyplot.plot(xo, yc, 'm--', lw=2, label='interp subsampled then convolve')
>>
>> y1 = unumpy.uarray(y1, y1*0.1)
>> a = time.time()
>> yo = nu_conv(y1, xi=x1, xo=xo, window_size=width, window_function=window, std_dev='propagate', interpolate=ninterp)
>> print('nu_conv time: {:}'.format(time.time() - a))
>> ye = nu_conv(y1, xi=x1, xo=xo, window_size=width, window_function=window, std_dev='expand', interpolate=ninterp)
>>
>> uband(x1, y1, color='b', marker='.', ms=10)
>> uband(xo, yo, color='r', lw=3, label='convolve subsampled')
>> uband(xo, ye, color='c', lw=3, label='convolve with expanded-error propagation')
>>
>> legend(loc=0)
>> pyplot.ylim([-0.1, 1.1])
>> pyplot.title('%d points , %s window, %3.3f width, interpolate %s' % (M, window, width, ninterp))

omfit_classes.utils_math.trismooth(*args, **kw)[source]

Smooth with triangle kernel, designed to mimic smooth in reviewplus.

Parameters
  • args – one of the following signatures:
    y, window_size: float array, float
    y, x, window_size: float array, float array, float
    y, window_size[optional] as OMFITmdsValue, float
    If x or window_size are not defined, smooth_by_convolution will pick values automatically. window_size is the half width of the kernel.

The default window_size set by smooth_by_convolution will probably be too small, unless you are providing a much higher resolution output grid with xo. You should probably set window_size by providing at least two arguments.

Parameters

kw – Keywords passed to smooth_by_convolution.

Returns

float array or uarray or tuple of arrays ysmo = result from smooth_by_convolution; array (default) or uarray (std_dev is modified) if OMFITmdsValue:

(x, ysmo)

else:

ysmo

omfit_classes.utils_math.cicle_fourier_smooth(R, Z, keep_M_harmonics, equalAngle=False, doPlot=False)[source]

Smooth a closed periodic boundary by zeroing out high harmonic components

Parameters
  • R – x coordinates of the boundary

  • Z – y coordinates of the boundary

  • keep_M_harmonics – how many harmonics to keep

  • equalAngle – use equal angle interpolation, and if so, how many points to use

  • doPlot – plot plasma boundary before and after

Returns

smoothed R and Z coordinates

omfit_classes.utils_math.nu_conv(yi, xi=None, xo=None, window_size=None, window_function='gaussian', axis=0, causal=False, interpolate=False, std_dev=2)

Convolution of a non-uniformly discretized array with window function.

The output values are np.nan where no points are found in finite windows (weight is zero). The gaussian window is infinite in extent, and thus returns values for all xo.

Supports uncertainties arrays. If the input –does not– have associated uncertainties, then the output will –not– have associated uncertainties.

Parameters
  • yi – array_like (…,N,…). Values of input array

  • xi – array_like (N,). Original grid points of input array (default: y indices)

  • xo – array_like (M,). Output grid points of convolution array (default xi)

  • window_size – float. Width passed to the window function (default: maximum xi step). For the Gaussian, sigma=window_size/4. and the convolution is integrated across +/-4.*sigma.

  • window_function – str/function. Accepted strings are ‘hanning’,’bartlett’,’blackman’,’gaussian’, or ‘boxcar’. Function should accept x and window_size as arguments and return a corresponding weight.

  • axis – int. Axis of y along which convolution is performed

  • causal – int. Forces f(x>0) = 0.

  • interpolate – False or integer > 0. If set, interpolate the data so that there are interpolate data points within a time window. This is useful in the presence of sparse data, which would otherwise result in stair-case output. The integer value sets the number of points per window size.

  • std_dev – str/int. Accepted strings are ‘none’, ‘propagate’, ‘population’, ‘expand’, ‘deviation’, ‘variance’. Only ‘population’ and ‘none’ are valid if yi is not an uncertainties array (i.e. std_devs(yi) is all zeros). Setting to an integer will convolve the error uncertainties to the std_dev power before taking the std_dev root.
    std_dev = ‘propagate’ is true propagation of errors (slow if not interpolating)
    std_dev = ‘population’ is the weighted “standard deviation” of the points themselves (strictly correct for the boxcar window)
    std_dev = ‘expand’ is propagation of errors weighted by w~1/window_function
    std_dev = ‘deviation’ is equivalent to std_dev=1
    std_dev = ‘variance’ is equivalent to std_dev=2

Returns

convolved array on xo

>> M = 300
>> ninterp = 10
>> window = ['hanning', 'gaussian', 'boxcar'][1]
>> width = 0.05
>> f = figure(num='nu_conv example')
>> f.clf()
>> ax = f.use_subplot(111)
>>
>> xo = np.linspace(0, 1, 1000)
>>
>> x0 = xo
>> y0 = (x0 > 0.25) & (x0 < 0.75)
>> pyplot.plot(x0, y0, 'b-', label='function')
>>
>> x1 = np.cumsum(rand(M))
>> x1 = (x1 - min(x1)) / (max(x1) - min(x1))
>> y1 = (x1 > 0.25) & (x1 < 0.75)
>> pyplot.plot(x1, y1, 'b.', ms=10, label='subsampled function')
>> if window == 'hanning':
>>     ys = smooth(interp1e(x0, y0)(xo), int(len(xo)*width))
>>     pyplot.plot(xo, ys, 'g-', label='smoothed function')
>>     yc = smooth(interp1e(x1, y1)(xo), int(len(xo)*width))
>>     pyplot.plot(xo, yc, 'm--', lw=2, label='interp subsampled then convolve')
>>
>> y1 = unumpy.uarray(y1, y1*0.1)
>> a = time.time()
>> yo = nu_conv(y1, xi=x1, xo=xo, window_size=width, window_function=window, std_dev='propagate', interpolate=ninterp)
>> print('nu_conv time: {:}'.format(time.time() - a))
>> ye = nu_conv(y1, xi=x1, xo=xo, window_size=width, window_function=window, std_dev='expand', interpolate=ninterp)
>>
>> uband(x1, y1, color='b', marker='.', ms=10)
>> uband(xo, yo, color='r', lw=3, label='convolve subsampled')
>> uband(xo, ye, color='c', lw=3, label='convolve with expanded-error propagation')
>>
>> legend(loc=0)
>> pyplot.ylim([-0.1, 1.1])
>> pyplot.title('%d points , %s window, %3.3f width, interpolate %s' % (M, window, width, ninterp))

class omfit_classes.utils_math.FilteredNDInterpolator(points, values, size_scales=None, check_finite=False, std_dev=True, filter_type='median')[source]

Bases: object

A class for smoothing ND data. Useful for down-sampling and gridding.

Mean or median filter interpolate in N dimensions. Calculates the mean or median of all values within a size_scale “sphere” of the interpolation point. Unlike linear interpolation, you can incorporate information from all your data when down-sampling to a regular grid.

Of course, it would be better to do a weight function (gaussian, hanning, etc) convolution, but the “volume” elements required for integration get very computationally expensive in multiple dimensions. In that case, try linear interpolation followed by regular-grid convolution (ndimage processing).

Parameters
  • points – np.ndarray of floats shape (M, D), where D is the number of dimensions

  • values – np.ndarray of floats shape (M,)

  • size_scales – tuple (D,) scales of interest in each dimension

  • check_finite – bool. Cleans non-finite points and/or values prior to interpolation

  • std_dev – bool. Estimate uncertainty of mean or median interpolation

  • filter_type – str. Accepts ‘mean’ or ‘median’

Examples

This example shows the interpolator is reasonably fast, but note it is nowhere near as scalable to 100k+ points as linear interpolation + ndimage processing.

>> ngrid = 100
>> width = (0.05, 0.05)
>> noise_factor = 0.5

>> def func(x, y):
>>     return x * (1 - x) * np.cos(4 * np.pi * x) * np.sin(4 * np.pi * y ** 2) ** 2

>> x = np.linspace(0, 1, ngrid)
>> y = np.linspace(0, 1, ngrid)
>> grid_x, grid_y = np.meshgrid(x, y)
>> xy = np.array(np.meshgrid(x, y)).T.reshape(-1, 2)

>> fig, axs = pyplot.subplots(3, 2, sharex=True, sharey=True, figsize=(8, 9))
>> axs[0, 0].set_title('Original')
>> data = func(grid_x, grid_y)
>> im = axs[0, 0].imshow(data, extent=(0, 1, 0, 1), origin='lower', aspect='auto')
>> kw = dict(extent=(0, 1, 0, 1), origin='lower', aspect='auto', clim=im.get_clim())

>> axs[0, 1].set_title('Noisy')
>> data = data + (np.random.random(data.shape) - 0.5) * np.ptp(data) * noise_factor
>> im = axs[0, 1].imshow(data, **kw)

>> ns = [400, 1600, 6400]
>> times, nums = [], []
>> for npts, axi in zip(ns, axs[1:]):
>>     print('npts = {:}'.format(npts))
>>     points = np.random.rand(npts, 2)
>>     values = func(points[:, 0], points[:, 1])
>>     values = values + (np.random.random((npts,)) - 0.5) * np.ptp(values) * noise_factor

>>     strt = datetime.datetime.now()
>>     # ci = ConvolutionNDInterpolator(points, values, xi=xy, window_size=width, window_function='boxcar')
>>     ci = FilteredNDInterpolator(points, values, size_scales=width, filter_type='mean')(grid_x, grid_y)
>>     ctime = datetime.datetime.now() - strt
>>     print(' > Convolution took {:}'.format(ctime))
>>     times.append(ctime.total_seconds())
>>     nums.append(npts)

>>     ax = axi[1]
>>     ax.set_title('Interpolated {:} pts'.format(npts))
>>     yi = LinearNDInterpolator(points, values)(grid_x, grid_y)
>>     im = ax.imshow(nominal_values(yi), **kw)
>>     l, = ax.plot(points[:, 0], points[:, 1], 'k.', ms=1)

>>     ax = axi[0]
>>     ax.set_title('Convolved {:} pts'.format(npts))
>>     im = ax.imshow(nominal_values(ci), **kw)
>>     l, = ax.plot(points[:, 0], points[:, 1], 'k.', ms=1)

>> fig.tight_layout()
>> axs[0, 0].set_xlim(0, 1)
>> axs[0, 0].set_ylim(0, 1)

>> fig, axs = pyplot.subplots()
>> axs.plot([0] + nums, [0] + times, marker='o')
>> axs.set_xlabel('# of points')
>> axs.set_ylabel('time (s)')
>> print('sec/k ~ {:.3e}'.format(1e3 * np.mean(np.array(times) / ns)))

This example shows that the median filter is “edge preserving” and contrasts both filters with a more sophisticated convolution. Note the boxcar convolution is not identical to the mean filter for low sampling because the integral weights isolated points more than very closely spaced points.

>> npts = 50
>> window = ['hanning', 'gaussian', 'boxcar'][1]
>> width = 0.05
>> ped = 0.1
>> pedwidth = 0.03
>> fun = lambda x: (0.9 - x)*2*(x < 0.9) + 1.0 - np.tanh((x - (1.0 - ped))/(0.25*pedwidth))

>> fig, axs = pyplot.subplots(3, sharex=True, sharey=True)
>> axs[0].set_title('Pedestal width {:}, Filter Length Scale {:}'.format(pedwidth, width))
>> for npts, ax in zip([20, 50, 200], axs):
>>     x0 = np.linspace(0, 1, 1000)

>>     x1 = np.cumsum(rand(npts))
>>     x1 = (x1 - min(x1)) / (max(x1) - min(x1))
>>     y1 = fun(x1) + np.random.random(x1.shape) * 0.4 * fun(x1)
>>     y1 = uarray(y1, y1 * 0.1)

>>     t0 = datetime.datetime.now()
>>     yc = nu_conv(y1, xi=x1, xo=x0, window_size=width, window_function=window, std_dev='propagate')
>>     tc = datetime.datetime.now() - t0
>>     print('nu_conv time: {:}'.format(tc))
>>     t0 = datetime.datetime.now()
>>     yd = FilteredNDInterpolator(x1.reshape(-1, 1), y1, size_scales=width / 2.)(x0)
>>     td = datetime.datetime.now() - t0
>>     print('median filter time: {:}'.format(td))
>>     t0 = datetime.datetime.now()
>>     ym = FilteredNDInterpolator(x1.reshape(-1, 1), y1, size_scales=width / 2., filter_type='mean')(x0)
>>     tm = datetime.datetime.now() - t0
>>     print('mean filter time: {:}'.format(tm))

>>     # ax.plot(x0, fun(x0), label='function')
>>     uband(x0, yc, ax=ax, lw=3, label='{:} convolution, {:.2f}s'.format(window.title(), tc.total_seconds()))
>>     uband(x0, ym, ax=ax, lw=3, label='Mean Filter, {:.2f}s'.format(tm.total_seconds()))
>>     uband(x0, yd, ax=ax, lw=3, label='Median Filter, {:.2f}s'.format(td.total_seconds()))
>>     uerrorbar(x1, y1, ax=ax, markersize=4, ls='-', lw=0.5, label='{:} data points'.format(npts), color='k')
>>     ax.legend(loc=0, frameon=False)
>>     ax.set_ylim([-0.1, 5])
>> fig.tight_layout()

exception omfit_classes.utils_math.WeightedAvgBadInput[source]

Bases: ValueError, omfit_classes.exceptions_omfit.doNotReportException

weighted_avg got some invalid values and has halted.

omfit_classes.utils_math.weighted_avg(x, err=None, minerr=None, return_stddev_mean=False, return_stddev_pop=False, nan_policy='ignore')[source]

Weighted average using uncertainty

Propagates errors and calculates weighted standard deviation. While nu_conv does these for a sliding window vs. time, this function is simpler and does calculations for a single mean of an array.

Parameters
  • x – 1D float array The input data to be averaged

  • err – 1D float array Uncertainty in x. Should have the same units as x. Should have the same length as x. Special case: a single value of err will be used to propagate errors for the standard deviation of the mean, but will produce uniform (boring) weights. If no uncertainty is provided (err==None), then uniform weighting is used.

  • minerr – float Put a floor on the uncertainties before calculating weight. This prevents a few points with unreasonably low error from dominating the calculation. Should be a scalar with same units as x.

  • return_stddev_mean – bool Flag for whether the standard deviation of the mean (propagated uncertainty) should be returned with the mean.

  • return_stddev_pop – bool Flag for whether the standard deviation of the population (weighted standard deviation) should be returned with the mean.

  • nan_policy – str ‘nan’: return NaN if there are NaNs in x or err; ‘error’: raise an exception; ‘ignore’: perform the calculation on the non-NaN elements only (default)

Returns

float or tuple:
mean if return_stddev_mean = False and return_stddev_pop = False
(mean, xpop, xstdm) if return_stddev_mean = True and return_stddev_pop = True
(mean, xpop) if return_stddev_mean = False and return_stddev_pop = True
(mean, xstdm) if return_stddev_mean = True and return_stddev_pop = False
where xstdm and xpop are the propagated error and the weighted standard deviation.
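The weighting scheme can be sketched with plain NumPy. This is a hypothetical standalone helper (`weighted_avg_sketch` is this sketch's own name, not the OMFIT implementation): inverse-variance weights give the mean, the propagated error of the mean, and the weighted population standard deviation.

```python
import numpy as np

def weighted_avg_sketch(x, err):
    """Inverse-variance weighted mean with propagated and population errors."""
    w = 1.0 / err**2                                       # weights ~ 1/sigma^2
    mean = np.sum(w * x) / np.sum(w)                       # weighted mean
    xstdm = np.sqrt(1.0 / np.sum(w))                       # propagated error (std dev of the mean)
    xpop = np.sqrt(np.sum(w * (x - mean)**2) / np.sum(w))  # weighted population std dev
    return mean, xstdm, xpop

x = np.array([1.0, 2.0, 3.0])
err = np.array([0.1, 0.1, 0.1])
mean, xstdm, xpop = weighted_avg_sketch(x, err)
# with uniform errors this reduces to the ordinary mean and standard error
```

With equal uncertainties the weights are uniform, so the result matches the unweighted mean and its standard error.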

omfit_classes.utils_math.firFilter(time_s, data, cutoff_hz, sharpness=0.5, full_output=False)[source]

Filter signal data with FIR filter (lowpass, highpass)

Parameters
  • time_s – time basis of the data signal in seconds

  • data – data signal

  • cutoff_hz – a list of two elements with low and high cutoff frequencies [lowFreq,highFreq]

  • sharpness – sharpness of the filter (taps of the FIR filter = sample_rate/2. * sharpness)

  • full_output – return filter window and frequency in addition to filtered data

Returns

filtered data or tuple with filtered data and filter window and frequency as a tuple
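A bandpass FIR filter of this kind can be sketched with scipy.signal. This is a standalone illustration, not the OMFIT code; the tap count is picked by hand here rather than derived from the sharpness parameter.

```python
import numpy as np
from scipy import signal

t = np.arange(0, 1, 1e-4)                    # 1 s of data sampled at 10 kHz
y = np.sin(2 * np.pi * 50 * t) + np.sin(2 * np.pi * 2000 * t)

fs = 1.0 / (t[1] - t[0])
taps = signal.firwin(1001, [10, 500], pass_zero=False, fs=fs)  # bandpass 10-500 Hz
yf = signal.filtfilt(taps, [1.0], y)         # zero-phase application of the FIR window
# the 2 kHz component is removed, leaving the 50 Hz tone
```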

omfit_classes.utils_math.butter_smooth(xx, yy, timescale=None, cutoff=None, laggy=False, order=1, nan_screen=True, btype='low')[source]

Butterworth smoothing lowpass filter.

Similar to firFilter, but with a notable difference in edge effects (butter_smooth may behave better at the edges of an array in some cases).

Parameters
  • xx – 1D array Independent variable. Should be evenly spaced for best results.

  • yy – array matching dimension of xx

  • timescale – float [specifiy either timescale or cutoff] Smoothing timescale. Units should match xx.

  • cutoff – float [specify either timescale or cutoff] Cutoff frequency. Units should be inverse of xx. (xx in seconds, cutoff in Hz; xx in ms, cutoff in kHz, etc.)

  • laggy – bool True: causal filter; smoothed output lags behind input. False: acausal filter; uses information from the future so that smoothed output doesn’t lag input.

  • order – int Order of butterworth filter. Lower order filters seem to have a longer tail after the ELM which is helpful for detecting the tail of the ELM.

  • nan_screen – bool Perform smoothing on only the non-NaN part of the array, then pad the result out with NaNs to maintain length

  • btype – string low or high. For smoothing, always choose low. You can do a highpass filter instead of lowpass by choosing high, though.

Returns

array matching dimension of xx Smoothed version of yy
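The acausal branch (laggy=False) corresponds to a zero-phase Butterworth filter, which can be sketched with scipy.signal. This is a standalone sketch under assumed behavior, not the OMFIT source; the order-2 filter and test signal are this sketch's own choices.

```python
import numpy as np
from scipy import signal

def butter_lowpass_sketch(xx, yy, cutoff, order=1):
    """Zero-phase Butterworth lowpass; cutoff in inverse units of xx."""
    fs = 1.0 / np.mean(np.diff(xx))                  # sample rate from evenly spaced xx
    b, a = signal.butter(order, cutoff / (0.5 * fs), btype='low')
    return signal.filtfilt(b, a, yy)                 # filtfilt -> no lag (acausal)

t = np.linspace(0, 1, 2000)
y = np.sin(2 * np.pi * 5 * t) + 0.3 * np.sin(2 * np.pi * 300 * t)
ys = butter_lowpass_sketch(t, y, cutoff=20.0, order=2)
# the 300 Hz ripple is removed while the 5 Hz shape is preserved
```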

class omfit_classes.utils_math.fourier_boundary(nf, rb, zb, symmetric=False)[source]

Bases: object

The routines to make and plot Fourier representation of a boundary from a list of points

make_fourier(nf, rb, zb)[source]
fourier_coef()[source]
reconstruct_boundary()[source]

Uses the fourier representation and amin, r0 and z0 parameters to reconstruct the boundary

omfit_classes.utils_math.fourier(y, t=None)[source]

Calculate fourier transform

Parameters
  • y – signal

  • t – time basis of the signal (uniformly spaced)

Returns

tuple with Fourier transform of the signal and frequency basis

>> t = r_[0:1:0.1]
>> y = np.cos(2 * np.pi * t)
>>
>> Y, W = fourier(y, t)
>>
>> ax = pyplot.subplot(2, 1, 1)
>> ax.plot(t, y)
>> ax.set_xlabel('t [s]')
>> ax.set_ylabel('$y$')
>>
>> ax = pyplot.subplot(2, 1, 2)
>> ax.plot(W / 2. / np.pi, Y)
>> ax.set_xlabel('f [Hz]')
>> ax.set_ylabel('$Y$')

omfit_classes.utils_math.windowed_FFT(t, y, nfft='auto', overlap=0.95, hanning_window=True, subtract_mean=True, real_input=None)[source]

Bin data into windows and compute FFT for each. Gives amplitude vs. time and frequency. Useful for making spectrograms.

Parameters
  • t – 1D time vector in ms

  • y – 1D parameter as a function of time

  • nfft – Number of points in each FFT bin. More points = higher frequency resolution but lower time resolution.

  • overlap – Fraction of window that overlaps previous/next window.

  • hanning_window – Apply a Hanning window to y(t) before each FFT.

  • subtract_mean – Subtract the average y value (of the entire data set, not individually per window) before calculating FFT

  • real_input – T/F: Use rfft instead of fft because all inputs are real numbers. Set to None to decide automatically.

Returns

tuple of:
  • spectral density (time, frequency)

  • array of times at the center of each FFT window

  • frequency

  • amplitude(time, positive frequency)

  • power(time, positive frequency)

  • positive-only frequency array

  • nfft (helpful if you let it be set automatically)

more on overlap for windows 0, 1, 2:

overlap = 0.0 : no overlap, window 0 ends where window 1 begins
overlap = 0.5 : half overlap, window 0 ends where window 2 begins
overlap = 0.99 : probably as high as you should go; it will look nice and smooth, but will take longer
overlap = 1.0 : FAIL, all the windows would stack on top of each other and infinite windows would be required

more on nfft:

Set nfft='auto' and the function will pick a power of two that should give a reasonable view of the dataset. It won’t choose nfft < 16.
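The same binning-and-FFT idea is what scipy.signal.stft implements; a minimal sketch of recovering a tone's frequency with a Hanning window and 95% overlap (illustration only, with a made-up test signal, not the OMFIT return signature):

```python
import numpy as np
from scipy import signal

fs = 10000.0                                   # samples per second
t = np.arange(0, 1, 1 / fs)
y = np.sin(2 * np.pi * 1234.0 * t)             # pure tone at 1234 Hz

nfft = 256
f, tau, Z = signal.stft(y, fs=fs, window='hann', nperseg=nfft,
                        noverlap=int(0.95 * nfft))   # 95% window overlap
peak = f[np.argmax(np.abs(Z).mean(axis=1))]    # strongest frequency bin
```

The frequency resolution is fs/nfft, so the peak lands in the bin nearest 1234 Hz; tau gives the center time of each window.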

omfit_classes.utils_math.noise_estimator(t, y, cutoff_timescale=None, cutoff_omega=None, window_dt=0, dt_var_thresh=1e-09, avoid_jumps=False, debug=False, debug_plot=None, restrict_cutoff=False)[source]

Estimates uncertainty in a signal by assuming that high frequency (above cutoff frequency) variation is random noise

Parameters
  • t – 1D float array with regular spacing (ms) Time base for the input signal

  • y – 1D float array with length matching t (arbitrary) Input signal value as a function of t

  • cutoff_timescale – float (ms) For a basic RC filter, this would be tau = R*C. Define either this or cutoff_omega.

  • cutoff_omega – float (krad/s) The cutoff angular frequency, above which variation is assumed to be noise. Overrides cutoff_timescale if specified. cutoff_timescale = 1.0/cutoff_omega

  • window_dt – float or None (ms) Window half width for windowed_fft, used in some strategies for time varying noise. If <= 0, one scalar noise estimate is made for the entire range of t, using FFT methods. Set to None to choose window size automatically based on cutoff_timescale. This option does not seem to be as good as the standard method.

  • debug – bool Flag for activating debugging tests, plots (unless debug_plot is explicitly disabled), and reports. Also returns all four estimates instead of just the best one.

  • debug_plot – bool [optional] By default, debug_plot copies the value of debug, but you can set it to False to disable plots and still get other debugging features. Setting debug_plot without debug is not supported and will be ignored.

  • dt_var_thresh – float (fraction) Threshold for variability in dt. t must be evenly spaced, so nominally dt will be a constant. Because of various real world issues, there could be some slight variation. As long as this is small, everything should be fine. You can adjust the threshold manually, but be careful: large variability in the spacing of the time base breaks assumptions in the signal processing techniques used by this method. If std(dt)/mean(dt) exceeds this threshold, an exception will be raised.

  • avoid_jumps – bool Large jumps in signal will have high frequency components which shouldn’t be counted with high frequency noise. You SHOULD pick a time range to avoid these jumps while estimating noise, but if you don’t want to, you can try using this option instead. If this flag is true, an attempt will be made to identify time periods when jumps are happening. The time derivative, dy/dt, is evaluated at times where there are no detected jumps and interpolated back onto the full time base before being integrated to give a new signal. The new signal should have a continuous derivative with no spikes, such that its high frequency component should now be just noise. This will prevent the high frequency components of a jump in y from bleeding into the noise estimate near the jump. The noise estimate during the jump may not be accurate, but you were never going to get a good fix on that, anyway. The big problem that is solved by this option is that the jump causes spurious spectral noise which extends well before and after the jump itself. Cutting the jump out somehow confines the problem to the relatively narrow time range when the jump itself is happening.

  • restrict_cutoff – bool Some versions of scipy throw an error if cutoff_frequency > nyquist frequency, and others do not. If your version hates high frequency cutoffs, set this to True and cutoff will be reduced to nyquist - df/2.0, where df is the frequency increment of the FFT, if cutoff >= nyquist.

Returns

1D uncertain float array with length matching t (or a set of four such arrays with different estimates, if the debug flag is set): lowpass smoothed y with uncertainty (dimensions and units match input y). The standard estimate is a Hilbert envelope of the high frequency part of the signal, times a constant for correct normalization:

ylf = butterworth_lowpass(y, cutoff_frequency)
yhf = butterworth_highpass(y, cutoff_frequency)
uncertainty = smooth(hilbert_env(yhf)) * norm_factor
return = unumpy.uarray(ylf, uncertainty)

where smoothing of the envelope is accomplished by a Butterworth lowpass filter using cutoff_frequency, and norm_factor = np.sqrt(0.5) = std_dev(sin(x)). There are other estimates (accessible by setting the debug flag) based on the fluctuation amplitude in the windowed FFT above the cutoff frequency.
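This standard estimate can be sketched end-to-end with scipy.signal. It is a standalone illustration under assumed choices (the order-2 Butterworth and the synthetic test signal are this sketch's own, not taken from the OMFIT source):

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
t = np.linspace(0, 100, 4000)                    # time base in ms
clean = np.sin(2 * np.pi * 0.05 * t)             # slow underlying signal
y = clean + 0.2 * rng.normal(size=t.size)        # plus white noise

fs = 1.0 / (t[1] - t[0])                         # samples per ms
b, a = signal.butter(2, 1.0 / (0.5 * fs))        # cutoff at 1 cycle/ms
ylf = signal.filtfilt(b, a, y)                   # lowpass -> smoothed signal
yhf = y - ylf                                    # high frequency remainder
env = np.abs(signal.hilbert(yhf))                # hilbert envelope of the noise
uncertainty = signal.filtfilt(b, a, env) * np.sqrt(0.5)
```

The smoothed signal tracks the slow component while the envelope tracks the noise amplitude, which is the essence of the standard estimate above.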

omfit_classes.utils_math.ihs_opti_transform(x, shift=False, lmbda=None, off=None)[source]

Calculate the optimal coefficient for the inverse hyperbolic sine transformation of the data

Parameters
  • x – input data

  • shift – calculate and return offset coefficients

  • lmbda – initial guess for lmbda parameter

  • off – initial guess for offset parameter

Returns

lmbda and offset coefficients (offset=0 if shift==False)

omfit_classes.utils_math.ihs(x, lmbda=1.0, off=0.0)[source]

Inverse hyperbolic sine transform: y=np.arcsinh(lmbda*(x-off))/lmbda

Parameters
  • x – input data

  • lmbda – transformation coefficient

  • off – offset coefficient

Returns

transformed data

omfit_classes.utils_math.iihs(y, lmbda=1.0, off=0.0)[source]

Inverse of Inverse hyperbolic sine transform: x=np.sinh(y*lmbda)/lmbda+off

Parameters
  • y – transformed data

  • lmbda – transformation coefficient

  • off – offset coefficient

Returns

original input data
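Since the two transforms are exact inverses, a round trip recovers the data. The following is a direct NumPy transcription of the formulas above (standalone, not the OMFIT module):

```python
import numpy as np

def ihs(x, lmbda=1.0, off=0.0):
    # forward transform: y = arcsinh(lmbda*(x - off)) / lmbda
    return np.arcsinh(lmbda * (x - off)) / lmbda

def iihs(y, lmbda=1.0, off=0.0):
    # inverse transform: x = sinh(y*lmbda)/lmbda + off
    return np.sinh(y * lmbda) / lmbda + off

x = np.array([-10.0, 0.0, 3.5, 100.0])
roundtrip = iihs(ihs(x, lmbda=2.0, off=1.0), lmbda=2.0, off=1.0)
```

Unlike a log transform, arcsinh is defined for zero and negative values, which is why it is useful for heavy-tailed data of either sign.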

omfit_classes.utils_math.array_snippet(a)[source]

Gives a string summarizing array or list contents: either elements [0, 1, 2, …, -3, -2, -1] or all elements

Parameters

a – Array, list, or other compatible iterable to summarize

Returns

String with summary of array, or just str(a) for short arrays

omfit_classes.utils_math.ordinal(n, fmt='%d%s')[source]

Return ordinal of an integer ‘1st’, ‘2nd’, ‘3rd’, ‘4th’, ‘5th’, … Based on: https://stackoverflow.com/a/20007730/6605826

Parameters
  • n – integer

  • fmt – string format to use (“%d%s” is the default)

Returns

string with ordinal number
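The suffix rule from the linked answer can be sketched in a few lines (a hypothetical re-implementation for illustration, not the OMFIT source):

```python
def ordinal_sketch(n, fmt='%d%s'):
    # 11th-13th (and 111th, 112th, ...) are special-cased via n % 100;
    # otherwise the last digit picks st/nd/rd/th
    if 10 <= n % 100 <= 20:
        suffix = 'th'
    else:
        suffix = {1: 'st', 2: 'nd', 3: 'rd'}.get(n % 10, 'th')
    return fmt % (n, suffix)
```

The n % 100 check is what makes 11, 12, 13 come out as ‘11th’, ‘12th’, ‘13th’ while 21 still becomes ‘21st’.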

omfit_classes.utils_math.array_info(inv)[source]
Parameters

inv – input variable (recarray, np.ndarray)

Returns

string summarizing arrays statistics

exception omfit_classes.utils_math.RomanError[source]

Bases: Exception

exception omfit_classes.utils_math.RomanOutOfRangeError[source]

Bases: omfit_classes.utils_math.RomanError

exception omfit_classes.utils_math.RomanNotIntegerError[source]

Bases: omfit_classes.utils_math.RomanError

exception omfit_classes.utils_math.RomanInvalidNumeralError[source]

Bases: omfit_classes.utils_math.RomanError

omfit_classes.utils_math.toRoman(n)[source]

convert integer to Roman numeral

omfit_classes.utils_math.fromRoman(s)[source]

convert Roman numeral to integer
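Both conversions are greedy walks over the same value table; a self-contained sketch (not the OMFIT implementation, which additionally validates its input and raises the Roman* exceptions above):

```python
ROMAN = [(1000, 'M'), (900, 'CM'), (500, 'D'), (400, 'CD'), (100, 'C'),
         (90, 'XC'), (50, 'L'), (40, 'XL'), (10, 'X'), (9, 'IX'),
         (5, 'V'), (4, 'IV'), (1, 'I')]

def to_roman_sketch(n):
    # repeatedly subtract the largest table value that still fits
    out = []
    for value, numeral in ROMAN:
        while n >= value:
            out.append(numeral)
            n -= value
    return ''.join(out)

def from_roman_sketch(s):
    # scan left to right, consuming numerals in descending value order
    n, i = 0, 0
    for value, numeral in ROMAN:
        while s[i:i + len(numeral)] == numeral:
            n += value
            i += len(numeral)
    return n
```

Listing the subtractive pairs (CM, CD, XC, …) in the table is what keeps both directions a simple greedy loop.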

omfit_classes.utils_math.atomic_element(symbol_A=None, symbol=None, name=None, Z=None, Z_ion=None, mass=None, A=None, abundance=None, use_D_T=True, return_most_abundant=True)[source]

Returns dictionary with name, symbol, symbol_A, Z, mass, A, abundance information of all the elements that match a given query. Most of the information was gathered from: http://www.sisweb.com/referenc/source/exactmas.htm

Parameters
  • symbol_A – string Atomic symbol followed by the mass number in parenthesis eg. H(2) for Deuterium

  • symbol

    string Atomic symbol

    • can be followed by the mass number eg. H2 for Deuterium

    • can be preceded by the mass number and followed by the ion charge number eg. 2H1 for Deuterium

  • name – string Long name of the atomic element

  • Z – int Atomic number (proton count in nucleus)

  • Z_ion – int Charge number of the ion (if not specified, it is assumed Z_ion = Z)

  • mass – float Mass of the atomic element in AMU For matching, it will be easier to use A

  • A – int Mass of the atomic element rounded to the closest integer

  • abundance – float Abundance of the atomic element as a fraction 0 < abundance <= 1

  • use_D_T – bool Whether to use deuterium and tritium for isotopes of hydrogen

  • return_most_abundant – bool Whether only the most abundant element should be returned for a query that matches multiple isotopes

Returns

dictionary with all the elements that match a query

omfit_classes.utils_math.element_symbol(z, n=None)[source]

Return the element symbol given the charge (and possibly the number of neutrons)

Parameters
  • z – Ion’s charge (number of protons)

  • n – Ion’s number of nucleons

Returns

Element symbol (1 or 2 character string)

Note

If z==-1, then ‘elec’ is returned

omfit_classes.utils_math.element_label(**kw)[source]

Return a LaTeX formatted string with the element’s symbol, charge state as superscript, and optionally mass number as subscript.

Parameters
  • z_n or zn – int Nuclear charge / atomic number

  • z_a or za – int [optional] Ionic charge, including any electrons which may be bound. Defaults to displaying fully-stripped ion if not specified (z_a = z_n).

  • zamin – int Minimum for a range of Zs in a bundled charge state (like in SOLPS-ITER)

  • zamax – int Maximum for a range of Zs in a bundled charge state (like in SOLPS-ITER)

  • a or am – int [optional] Mass number. Provides additional filter on results.

Returns

string Prettily formatted symbol

omfit_classes.utils_math.z_from_symbol(symbol)[source]

Given an atomic element symbol, return its charge Z

Parameters

symbol – element symbol

Returns

element charge Z

omfit_classes.utils_math.splinet(t, y, x, tau)[source]

Tension spline evaluator, by Victor Aguiar, based on the numerical analysis of Kincaid.

Parameters
  • t – nodes location

  • y – value at the nodes

  • x – locations at which to evaluate the spline

  • tau – tension

class omfit_classes.utils_math.CLSQTensionSpline(x, y, t, tau=1, w=None, xbounds=(None, None), min_separation=0, xy_constraints=(), xyprime_constraints=(), optimize_knot_type='xy')[source]

Bases: object

Constrained least square tension spline.

Parameters
  • x – np.ndarray. Data grid.

  • y – np.ndarray. Data values.

  • t – int or np.ndarray. Knot number or locations.

  • tau – float. Tension (higher is smoother).

  • w – np.ndarray. Data weights (usually ~1/std_devs)

  • xbounds – tuple. Minimum and maximum x of knot locations.

  • min_separation – float. Minimum separation between knot locations.

  • xy_constraints – list of tuples. Spline is constrained to have these values at these locations.

  • xyprime_constraints – Spline is constrained to have these derivatives at these locations.

  • optimize_knot_type – str. choose ‘xy’ to simultaneously optimize knot (x,y) values, ‘x’ to optimize x and y separately, and ‘y’ to simply use the prescribed knot locations.

Examples

Using the same data from the LSQUnivariateSpline examples,

>> x = np.linspace(-3, 3, 50)
>> y = np.exp(-x**2) + 0.1 * np.random.randn(50)

We can fit a tension spline. We can even set some boundary constraints.

>> t = [-1, 0, 1]
>> spl = CLSQTensionSpline(x, y, t, tau=0.1, xy_constraints=[(-3, 0)])
>> xs = np.linspace(-3, 3, 1000)
>> pyplot.subplots()
>> pyplot.plot(x, y, marker='o', lw=0)
>> pyplot.plot(xs, spl(xs))

Note the xknots are optimized by default. We can compare to the un-optimized knot locations, but (for historical reasons) we won’t be able to set constraints.

>> sply = CLSQTensionSpline(x, y, t, tau=0.1, xy_constraints=[(-3, 0)], optimize_knot_type='y')
>> pyplot.plot(xs, sply(xs))


get_knots()[source]

Get the (x, y) locations and values of the knots

Returns

tuple. list of (x, y) tuples of knot location and value.

get_knot_locations()[source]

Get the x locations of knots

Returns

tuple. x locations of the knots.

get_knot_values()[source]

Get the values of the knots

Returns

tuple. y values of the knots.

class omfit_classes.utils_math.AutoUnivariateSpline(x, y, w=None, bbox=(None, None), k=3, ext=0, check_finite=False, max_interior_knots=None, verbose=False)[source]

Bases: scipy.interpolate.fitpack2.UnivariateSpline

Class of univariate spline with smoothing optimized for reduced chi squared assuming w=1/std_devs.

If no weight is supplied, it is taken to be 1/std_devs(y) and these should all be finite for this to make sense.

Examples

If we have some curve of data with meaningful deviations beyond the errorbars,

>> nx = 50
>> x = np.linspace(-3, 3, nx)
>> y = np.exp(-x**2) + 0.3 * (np.random.rand(nx) - 0.5)
>> u = unumpy.uarray(y, 0.1 * np.ones(nx))
>> fig, ax = pyplot.subplots()
>> uerrorbar(x, u, marker='o', label='data', ax=ax)

The scipy default spline uses smoothing s = len(x),

>> xs = np.linspace(min(x), max(x), 1000)
>> spl = UnivariateSpline(x, y, w=1 / std_devs(u))
>> ax.plot(xs, spl(xs), label='default spline')

but the uncertainties spline optimizes the smoothing for reduced chi squared ~1,

>> uspl = AutoUnivariateSpline(x, u)
>> ax.plot(xs, uspl(xs), label='auto spline')
>> ax.legend()

The figure shows the uncertainties spline more accurately captures the meaningful deviations beyond the errorbars. In numbers,

>> default_smooth = spl._data[6]
>> default_rchisq = spl.get_residual() / (nx - (len(spl.get_coeffs()) + len(spl.get_knots()) - 2))
>> print('Default smoothing is {:.1f} results in reduced chi squared {:.1f}'.format(default_smooth, default_rchisq))
>> print('Optimal smoothing of {:.1f} results in reduced chi squared {:.1f}'.format(uspl.get_smoothing_factor(),
...       uspl.get_reduced_chisqr()))

If the difference is not large, try running again (the deviations are random!).

To see how the optimizer arrived at the result, you can get the full evolution. Remember, it is targeting a reduced chi squared of unity.

>> s, f = uspl.get_evolution(norm=False)
>> fig, ax = pyplot.subplots()
>> ax.plot(s, f, marker='o', ls='')  # all points tested
>> ax.plot([uspl.get_smoothing_factor()], [uspl.get_reduced_chisqr() - 1], marker='s')  # final value
>> ax.set_xscale('log')
>> ax.set_xlabel('s')
>> ax.set_ylabel('Reduced chi squared - 1')

Parameters
  • x – np.ndarray. Must be increasing

  • y – unumpy.uarray. Uncertainties array from uncertainties.unumpy.

  • w – np.ndarray. Optionally overrides uncertainties from y. Assumed to be 1/std_devs of gaussian errors.

  • bbox – (2,) array_like. 2-sequence specifying the boundary of the approximation interval.

  • k – int. Degree of the smoothing spline. Must be 1 <= k <= 5. Default is k=3, a cubic spline.

  • ext – int or str. Controls the extrapolation mode for elements not in the knot interval. Default 0. If ext=0 or ‘extrapolate’, return the extrapolated value. If ext=1 or ‘zeros’, return 0. If ext=2 or ‘raise’, raise a ValueError. If ext=3 or ‘const’, return the boundary value.

  • check_finite – bool. Whether to check that the input arrays contain only finite numbers.

  • max_interior_knots – int. Maximum number of interior knots in a successful optimization. Use this to enforce over smoothing.

  • verbose – bool. Print warnings

get_reduced_chisqr()[source]
Returns

float. Reduced chi squared of the spline.

get_smoothing_factor()[source]
Returns

float. Smoothing factor used in UnivariateSpline.

get_evolution(norm=True)[source]
Parameters

norm – bool. Normalize s to the UnivariateSpline default npts.

Returns

tuple. sev, fev where sev is a record of all smoothing values tried in the optimization and fev is the corresponding reduced chi squared costs.

class omfit_classes.utils_math.CLSQUnivariateSpline(x, y, w=None, bbox=(None, None), k=3, ext=0, check_finite=False, t=None, optimize_knots=True, min_separation=0, xy_constraints=(), xyprime_constraints=(), maxiter=100, verbose=False)[source]

Bases: scipy.interpolate.fitpack2.LSQUnivariateSpline

Constrained least square univariate spline. This class sacrifices the generality of UnivariateSpline’s smoothing but enables the ability to constrain values and/or derivatives of the spline.

The constraints are used in an optimization of the knot locations, not fundamentally enforced in the underlying equations. Thus, constraints far from the natural spline will cause errors.

Examples

Using the same data from the LSQUnivariateSpline examples,

>> x = np.linspace(-2, 2, 50)
>> nominal_values = np.exp(-x**2)
>> std_devs = 0.1 * np.ones_like(nominal_values)
>> y = nominal_values + np.random.normal(loc=0, scale=std_devs)

If we simply use this to fit a spline it will use AutoUnivariateSpline to optimize the knot locations.

>> spl = CLSQUnivariateSpline(x, y, w=1. / std_devs)
>> xs = np.linspace(-2, 2, 1000)
>> pyplot.subplots()
>> pyplot.errorbar(x, y, std_devs, marker='o', ls='')
>> pyplot.plot(xs, spl(xs), label='unconstrained')

But the new part of this class is that it enables additional constraints on the spline. For example, we can request the spline have zero derivative at the left boundary.

>> splc = CLSQUnivariateSpline(x, y, w=1 / std_devs, min_separation=0.1, xyprime_constraints=[(-2, 0)])
>> pyplot.plot(xs, splc(xs), label='constrained')
>> pyplot.legend()

Initialize an instance of a constrained least square univariate spline.

Parameters
  • x – (N,) array_like. Input dimension of data points. Must be increasing

  • y – (N,) array_like. Input dimension of data points

  • w – (N,) array_like. Weights for spline fitting. Must be positive. Default is equal weighting.

  • bbox – (2,) array_like. 2-sequence specifying the boundary of the approximation interval.

  • k – int. Degree of the smoothing spline. Must be 1 <= k <= 5. Default is k=3, a cubic spline.

  • ext – int or str. Controls the extrapolation mode for elements not in the knot interval. Default 0. If ext=0 or ‘extrapolate’, return the extrapolated value. If ext=1 or ‘zeros’, return 0. If ext=2 or ‘raise’, raise a ValueError. If ext=3 or ‘const’, return the boundary value.

  • check_finite – bool. Whether to check that the input arrays contain only finite numbers.

  • t – (M,) array_like or integer. Interior knots of the spline in ascending order (t in LSQUnivariateSpline) or maximum number of interior knots (max_interior_knots in AutoUnivariateSpline).

  • optimize_knots – bool. Allow optimizer to change knot locations after initial guess from t or AutoUnivariateSpline.

  • min_separation – float. Minimum separation between knot locations if not explicitly specified by t.

  • xy_constraints – list of tuples. Spline is constrained to have these values at these locations.

  • xyprime_constraints – Spline is constrained to have these derivatives at these locations.

  • maxiter – int. Maximum number of iterations for spline coeff optimization under constraints.

  • verbose – bool. Print warnings

get_reduced_chisqr()[source]
Returns

float. Reduced chi squared of the spline.

class omfit_classes.utils_math.MonteCarloSpline(x, y, w=None, bbox=(None, None), k=3, ext=0, check_finite=False, t=None, optimize_knots=True, min_separation=0, xy_constraints=(), xyprime_constraints=(), maxiter=200, n_trials=100)[source]

Bases: omfit_classes.utils_math.CLSQUnivariateSpline

Monte Carlo Uncertainty propagation through python spline fits.

The concept follows https://gist.github.com/thriveth/4680e3d3cd2cfe561a57 by Thøger Rivera-Thorsen (thriveth), and essentially forms n_trials unique spline instances with randomly perturbed data assuming w=1/std_devs of gaussian noise.

Note, calling instances of this class returns unumpy.uarrays of Variable objects using the uncertainties package.

Examples

Using the same data from the LSQUnivariateSpline examples,

>> x = np.linspace(-2, 2, 50)
>> nominal_values = np.exp(-x**2)
>> std_devs = 0.1 * np.ones_like(nominal_values)
>> y = nominal_values + np.random.normal(loc=0, scale=std_devs)

We can fit a Monte Carlo spline to get an errorbar on the interpolation.

>> spl = MonteCarloSpline(x, y, w=1 / std_devs)
>> xs = np.linspace(-2, 2, 1000)
>> fig, ax = pyplot.subplots()
>> eb = ax.errorbar(x, y, std_devs, marker='o', ls='')
>> ub = uband(xs, spl(xs), label='unconstrained')

The individual Monte Carlo splines are CLSQUnivariateSpline instances, so we can set hard constraints as well.

>> splc = MonteCarloSpline(x, y, w=1 / std_devs, min_separation=0.1, xyprime_constraints=[(-2, 0)])
>> ub = uband(xs, splc(xs), label='constrained')
>> ax.legend()

Note, this class is a child of the scipy.interpolate.LSQUnivariateSpline class, and has all of the standard spline class methods. Where appropriate, these methods dig into the Monte Carlo trials to return uncertainties. For example,

>> print('knots are fixed at {}'.format(splc.get_knots()))
>> print('coeffs vary around {}'.format(splc.get_coeffs()))

Initialize an instance of a Monte Carlo constrained least square univariate spline.

Parameters
  • x – (N,) array_like. Input dimension of data points. Must be increasing

  • y – (N,) array_like. Input dimension of data points

  • w – (N,) array_like. Weights for spline fitting. Must be positive. Default is equal weighting.

  • bbox – (2,) array_like. 2-sequence specifying the boundary of the approximation interval.

  • k – int. Degree of the smoothing spline. Must be 1 <= k <= 5. Default is k=3, a cubic spline.

  • ext – int or str. Controls the extrapolation mode for elements not in the knot interval. Default 0. If ext=0 or ‘extrapolate’, return the extrapolated value. If ext=1 or ‘zeros’, return 0. If ext=2 or ‘raise’, raise a ValueError. If ext=3 or ‘const’, return the boundary value.

  • check_finite – bool. Whether to check that the input arrays contain only finite numbers.

  • t – (M,) array_like or integer. Interior knots of the spline in ascending order or maximum number of interior knots.

  • optimize_knots – bool. Allow optimizer to change knot locations after initial guess from t or AutoUnivariateSpline.

  • min_separation – float. Minimum separation between knot locations if not explicitly specified by t.

  • xy_constraints – list of tuples. Spline is constrained to have these values at these locations.

  • xyprime_constraints – Spline is constrained to have these derivatives at these locations.

  • maxiter – int. Maximum number of iterations for spline coeff optimization under constraints.

  • n_trials – int. Number of Monte Carlo spline iterations used to form errorbars.

antiderivative(n=1)[source]

Construct a new spline representing the antiderivative of this spline.

Parameters

n – int. Order of antiderivative to evaluate.

derivative(n=1)[source]

Construct a new spline representing the derivative of this spline.

Parameters

n – int. Order of derivative to evaluate.

derivatives(x)[source]

Return all derivatives of the spline at the point x.

get_coeffs()[source]

Return spline coefficients.

integral(a, b)[source]

Return definite integral of the spline between two given points.

roots()[source]

Return the zeros of the spline.

set_smoothing_factor(s)[source]

Continue spline computation with the given smoothing factor s and with the knots found at the last call.

omfit_classes.utils_math.contourPaths(x, y, Z, levels, remove_boundary_points=False, smooth_factor=1)[source]
Parameters
  • x – 1D x coordinate

  • y – 1D y coordinate

  • Z – 2D data

  • levels – levels to trace

  • remove_boundary_points – remove traces at the boundary

  • smooth_factor – smooth contours by cranking up grid resolution

Returns

list of segments

omfit_classes.utils_math.get_contour_generator(X, Y, Z, mask, corner_mask, nchunk)[source]
omfit_classes.utils_math.remove_adjacent_duplicates(x)[source]

Remove adjacent duplicate rows in a 2D array

Parameters

x – original array

Returns

array with adjacent duplicates removed

omfit_classes.utils_math.streamPaths(xm, ym, u, v, start_points, minlength=0, maxlength=10000000000.0, bounds_error=True)[source]

Integrate a vector field and return stream lines

Parameters
  • xm – uniformly spaced x grid

  • ym – uniformly spaced y grid

  • u – vector field in the x direction

  • v – vector field in the y direction

  • start_points – 2D array of seed x,y coordinate points used to start integration

  • minlength – reject trajectories shorter than this length

  • maxlength – stop tracing trajectory when this length is reached

  • bounds_error – raise error if trajectory starts outside of bounds

omfit_classes.utils_math.line_intersect(path1, path2, return_indices=False)[source]

intersection of two 2D paths

Parameters
  • path1 – array of (x,y) coordinates defining the first path

  • path2 – array of (x,y) coordinates defining the second path

  • return_indices – return indices of segments where intersection occurred

Returns

array of intersection points (x,y)
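The per-segment test behind such a routine can be sketched with the standard 2D cross-product parameterization. This is a hypothetical helper for a single pair of segments, not the OMFIT code (which additionally walks every segment pair of the two paths):

```python
import numpy as np

def cross2(a, b):
    # scalar cross product of 2D vectors
    return a[0] * b[1] - a[1] * b[0]

def seg_intersect(p1, p2, p3, p4):
    """Intersection of segments p1-p2 and p3-p4, or None if they miss."""
    p1, p2, p3, p4 = (np.asarray(p, float) for p in (p1, p2, p3, p4))
    d1, d2 = p2 - p1, p4 - p3
    denom = cross2(d1, d2)
    if np.isclose(denom, 0.0):
        return None                          # parallel (or degenerate) segments
    t = cross2(p3 - p1, d2) / denom          # parameter along segment 1
    u = cross2(p3 - p1, d1) / denom          # parameter along segment 2
    if 0.0 <= t <= 1.0 and 0.0 <= u <= 1.0:
        return p1 + t * d1
    return None                              # lines cross outside the segments

pt = seg_intersect((0, 0), (1, 1), (0, 1), (1, 0))
```

Requiring both parameters t and u to lie in [0, 1] is what restricts the result to the segments rather than the infinite lines.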

omfit_classes.utils_math.intersect3d_path_surface(path, surface)[source]

3D intersections of a path and a surface (list of triangles)

Parameters
  • path – list of points in 3D [[X0,Y0,Z0],[X1,Y1,Z1],…,[Xn,Yn,Zn]]

  • surface – list of triangles (three 3D points each) [[[X00,Y00,Z00],[X01,Y01,Z01],[X02,Y02,Z02]],…,[[Xn0,Yn0,Zn0],[Xn1,Yn1,Zn1],[Xn2,Yn2,Zn2]]]

Returns

list of intersections

omfit_classes.utils_math.intersect3d_path_triangle(path, triangle)[source]

3D intersections of a path and a triangle

Parameters
  • path – list of points in 3D [[X0,Y0,Z0],[X1,Y1,Z1],…,[Xn,Yn,Zn]]

  • triangle – three points in 3D [[X0,Y0,Z0],[X1,Y1,X1],[X2,Y2,X2]]

Returns

list of intersections

omfit_classes.utils_math.intersect3d_line_triangle(line, triangle)[source]

3D intersection of a line and a triangle https://stackoverflow.com/questions/42740765/intersection-between-line-and-triangle-in-3d

Parameters
  • line – two points in 3D [[X0,Y0,Z0],[X1,Y1,Z1]]

  • triangle – three points in 3D [[X0,Y0,Z0],[X1,Y1,Z1],[X2,Y2,Z2]]

Returns

None if no intersection is found, or 3D point of intersection
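
The algorithm behind the linked reference is Möller–Trumbore; a self-contained sketch for a finite segment (names are assumptions):

```python
import numpy as np

def line_triangle_intersection(p0, p1, triangle, eps=1e-12):
    """Segment/triangle intersection via the Moller-Trumbore algorithm.
    Returns the 3D intersection point, or None if there is no hit."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    v0, v1, v2 = (np.asarray(v, float) for v in triangle)
    d = p1 - p0                      # segment direction
    e1, e2 = v1 - v0, v2 - v0        # triangle edges from v0
    h = np.cross(d, e2)
    a = np.dot(e1, h)
    if abs(a) < eps:                 # segment parallel to triangle plane
        return None
    f = 1.0 / a
    s = p0 - v0
    u = f * np.dot(s, h)             # first barycentric coordinate
    if u < 0 or u > 1:
        return None
    q = np.cross(s, e1)
    v = f * np.dot(d, q)             # second barycentric coordinate
    if v < 0 or u + v > 1:
        return None
    t = f * np.dot(e2, q)            # parameter along the segment
    if t < 0 or t > 1:               # intersection outside the finite segment
        return None
    return p0 + t * d
```

Dropping the final `0 <= t <= 1` check turns this into an infinite-line test.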

omfit_classes.utils_math.get_s_on_wall(rp, zp, rlim, zlim, slim)[source]

Given R,Z of a point p and curve lim and parameter slim counting distance along the curve, find s at point p.

Primarily intended for mapping back from R,Z to s. Simple interpolation doesn’t work because the rlim, zlim arrays do not monotonically increase.

If the point is not on the curve, return s coordinate at point closest to (rp, zp).

Units are quoted in meters, but it should work if all the units are changed consistently.

Parameters
  • rp – float or 1D float array R coordinate(s) of point(s) of interest in meters

  • zp – float or 1D float array

  • rlim – 1D float array R values of points along the wall in meters

  • zlim – 1D float array Z values corresponding to rlim

  • slim – 1D float array s values corresponding to rlim

Returns

float or 1D float array s at the point(s) on the wall closest to (rp, zp)

omfit_classes.utils_math.point_to_line(px, py, x1, y1, x2, y2, return_closest_point=False)[source]

Calculate minimum distance from a set of points to a set of line segments.

The segments might be defined by consecutive vertices in a polyline. The closest point is closest to the SEGMENT, not the line extended to infinity.

The inputs can be arrays or scalars (that get forced into 1 element arrays). All arrays longer than 1 must have matching length. If (px, py) and (x1, x2, y1, y2) have the same length, the comparison is done for (px[0], py[0]) vs (x1[0], y1[0], x2[0], y2[0]). That is, line 0 (x1[0], …) is only compared to point 0.

All inputs should have matching units.

Parameters
  • px – 1D float array-like X coordinates of test points

  • py – 1D float array-like Y coordinates of test points

  • x1 – 1D float array-like X-coordinates of the first endpoint of each line segment.

  • x2 – 1D float array-like X-coordinates of the second endpoint of each line segment.

  • y1 – 1D float array-like Y-coordinates of the first endpoint of each line segment.

  • y2 – 1D float array-like Y-coordinates of the second endpoint of each line segment.

  • return_closest_point – bool Return the coordinates of the closest points instead of the distances.

Returns

array or tuple of arrays if return_closest_point = True:

tuple of two 1D float arrays with the X and Y coordinates of the closest point on each line segment to each point.

if return_closest_point = False:

1D float array giving the shortest distances between the points and the line segments.
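
The core computation (project onto the segment, clamp the parameter to [0, 1], measure the distance) can be sketched in vectorized form; a simplified stand-in for the library function with assumed names:

```python
import numpy as np

def point_to_segment(px, py, x1, y1, x2, y2):
    """Minimum distance from points (px, py) to segments (x1, y1)-(x2, y2),
    compared elementwise after broadcasting, as described above."""
    px, py, x1, y1, x2, y2 = np.broadcast_arrays(
        *map(np.atleast_1d, (px, py, x1, y1, x2, y2)))
    dx, dy = x2 - x1, y2 - y1
    seg2 = dx**2 + dy**2                       # squared segment length
    # parameter of the projection onto the infinite line, clamped to the segment
    t = np.where(seg2 > 0,
                 ((px - x1) * dx + (py - y1) * dy) / np.where(seg2 > 0, seg2, 1),
                 0.0)
    t = np.clip(t, 0, 1)
    cx, cy = x1 + t * dx, y1 + t * dy          # closest point on each segment
    return np.hypot(px - cx, py - cy)
```

The clamp is what makes the answer the distance to the SEGMENT rather than to the infinite line: beyond either endpoint, the closest point snaps to that endpoint.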

omfit_classes.utils_math.point_in_poly(x, y, poly)[source]

Determine if a point is inside a given polygon or not. Polygon is a list of (x,y) pairs. This function returns True or False. The algorithm is called the “Ray Casting Method”. Source: http://geospatialpython.com/2011/01/point-in-polygon.html , retrieved 20160105 18:39

Parameters
  • x, y – floats Coordinates of the point to test

  • poly – List of (x,y) pairs defining a polygon.

Returns

bool Flag indicating whether or not the point is within the polygon.

To test:

polygon = [(0,10),(10,10),(10,0),(0,0)]
point_x = 5
point_y = 5
inside = point_in_poly(point_x, point_y, polygon)
print(inside)
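
The Ray Casting Method described above fits in a few lines; a standalone sketch consistent with the test snippet:

```python
def point_in_poly(x, y, poly):
    """Ray casting: count how many polygon edges a horizontal ray from
    (x, y) toward +x crosses; an odd count means the point is inside."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]      # wrap to close the polygon
        # does this edge straddle the ray's y level?
        if (y1 > y) != (y2 > y):
            # x coordinate where the edge crosses the ray's y level
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:             # crossing is to the right of the point
                inside = not inside
    return inside
```

The strict/non-strict mix in `(y1 > y) != (y2 > y)` ensures each vertex is counted for exactly one of its two edges, avoiding double-counting when the ray passes through a vertex.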

omfit_classes.utils_math.centroid(x, y)[source]

Calculate centroid of polygon

Parameters
  • x – x coordinates of the polygon

  • y – y coordinates of the polygon

Returns

tuple with x and y coordinates of centroid
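
The standard way to compute a polygon centroid is the area-weighted (shoelace) formula; a sketch under that assumption (the library may weight vertices differently):

```python
import numpy as np

def polygon_centroid(x, y):
    """Area-weighted centroid of a polygon via the shoelace formula.
    Works for open vertex lists; np.roll supplies the closing edge."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xn, yn = np.roll(x, -1), np.roll(y, -1)   # next vertex, wrapping around
    cross = x * yn - xn * y                   # per-edge cross products
    area = 0.5 * np.sum(cross)                # signed polygon area
    cx = np.sum((x + xn) * cross) / (6 * area)
    cy = np.sum((y + yn) * cross) / (6 * area)
    return cx, cy

print(polygon_centroid([0, 1, 1, 0], [0, 0, 1, 1]))  # (0.5, 0.5) for the unit square
```

Because `area` is signed, the result is independent of whether the vertices run clockwise or counterclockwise.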

omfit_classes.utils_math.usqrt(y)[source]

Handle uncertainties if needed

omfit_classes.utils_math.ulog(y)[source]

Handle uncertainties if needed

omfit_classes.utils_math.gaussian_filter1d(input, sigma, axis=- 1, order=0, output=None, mode='reflect', cval=0.0, truncate=4.0, causal=False)[source]

One-dimensional Gaussian filter.

Parameters
  • input – array-like The input array.

  • sigma – scalar standard deviation for Gaussian kernel

  • axis – int, optional The axis along which to filter. Default is -1.

  • order – int, optional An order of 0 corresponds to convolution with a Gaussian kernel. A positive order corresponds to convolution with that derivative of a Gaussian.

  • output – array or dtype, optional The array in which to place the output, or the dtype of the returned array.

  • mode – str, optional The mode parameter determines how the input array is extended beyond its boundaries. Default is ‘reflect’.

  • cval – scalar, optional Value to fill past edges of input if mode is ‘constant’. Default is 0.0.

  • truncate – float, optional Truncate the filter at this many standard deviations. Default is 4.0.

  • causal – bool or sequence of bools Remove all forward weightings.

Returns

gaussian_filter1d : np.ndarray

Examples

>> from scipy.ndimage import gaussian_filter1d
>> gaussian_filter1d([1.0, 2.0, 3.0, 4.0, 5.0], 1)
np.array([ 1.42704095, 2.06782203, 3. , 3.93217797, 4.57295905])
>> gaussian_filter1d([1.0, 2.0, 3.0, 4.0, 5.0], 4)
np.array([ 2.91948343, 2.95023502, 3. , 3.04976498, 3.08051657])
>> from matplotlib import pyplot
>> np.random.seed(280490)
>> x = np.random.randn(101).cumsum()
>> y3 = gaussian_filter1d(x, 3)
>> y6 = gaussian_filter1d(x, 6)
>> pyplot.plot(x, 'k', label='original data')
>> pyplot.plot(y3, '--', label='filtered, sigma=3')
>> pyplot.plot(y6, ':', label='filtered, sigma=6')
>> pyplot.legend()
>> pyplot.grid()
>> pyplot.show()

Causality example:

>> x = np.arange(0, 100)
>> y = 1. * (x > 40) * (x < 60)
>> fig, ax = pyplot.subplots()
>> ax.plot(x, y, 'x-')
>> ax.plot(x, gaussian_filter1d(y, 3.))
>> ax.plot(x, gaussian_filter1d(y, 3., causal=True))
>> ax.set_ylim(0, 1.5)

omfit_classes.utils_math.gaussian_filter(input, sigma, order=0, output=None, mode='reflect', cval=0.0, truncate=4.0, causal=False)[source]

Multidimensional Gaussian filter.

Parameters
  • input – array-like The input array.

  • sigma – scalar or sequence of scalars Standard deviation for Gaussian kernel. The standard deviations of the Gaussian filter are given for each axis as a sequence, or as a single number, in which case it is equal for all axes.

  • order – int or sequence of ints, optional The order of the filter along each axis is given as a sequence of integers, or as a single number. An order of 0 corresponds to convolution with a Gaussian kernel. A positive order corresponds to convolution with that derivative of a Gaussian.

  • output – array or dtype, optional The array in which to place the output, or the dtype of the returned array.

  • mode – str or sequence, optional The mode parameter determines how the input array is extended beyond its boundaries. Default is ‘reflect’.

  • cval – scalar, optional Value to fill past edges of input if mode is ‘constant’. Default is 0.0.

  • truncate – float Truncate the filter at this many standard deviations. Default is 4.0.

  • causal – bool or sequence of bools Remove all forward weightings.

Returns

gaussian_filter : np.ndarray Returned array of same shape as input.

Notes

The multidimensional filter is implemented as a sequence of one-dimensional convolution filters. The intermediate arrays are stored in the same data type as the output. Therefore, for output types with a limited precision, the results may be imprecise because intermediate results may be stored with insufficient precision.

Examples

>> from scipy.ndimage import gaussian_filter
>> a = np.arange(50, step=2).reshape((5, 5))
>> a
np.array([[ 0,  2,  4,  6,  8],
          [10, 12, 14, 16, 18],
          [20, 22, 24, 26, 28],
          [30, 32, 34, 36, 38],
          [40, 42, 44, 46, 48]])
>> gaussian_filter(a, sigma=1)
np.array([[ 4,  6,  8,  9, 11],
          [10, 12, 14, 15, 17],
          [20, 22, 24, 25, 27],
          [29, 31, 33, 34, 36],
          [35, 37, 39, 40, 42]])
>> from scipy import misc
>> from matplotlib import pyplot
>> fig = pyplot.figure()
>> pyplot.gray()  # show the filtered result in grayscale
>> ax1 = fig.add_subplot(121)  # left side
>> ax2 = fig.add_subplot(122)  # right side
>> ascent = misc.ascent()
>> result = gaussian_filter(ascent, sigma=5)
>> ax1.imshow(ascent)
>> ax2.imshow(result)
>> pyplot.show()

Here is a nice little demo of the added OMFIT causal feature:

>> nn = 24
>> a = np.zeros((nn, nn))
>> a[int(nn//2), int(nn//2)] = 1
>> fig, axs = pyplot.subplots(2, 2)
>> ax = axs[0, 0]
>> ax.imshow(scipy.ndimage.gaussian_filter(a, 3, mode='nearest'), origin='lower')
>> ax.set_title('scipy version')
>> ax = axs[0, 1]
>> ax.imshow(gaussian_filter(a, 3, mode='nearest'), origin='lower')
>> ax.set_title('OMFIT version')
>> ax = axs[1, 0]
>> ax.imshow(gaussian_filter(a, 3, causal=True, mode='nearest'), origin='lower')
>> ax.set_title('causal True')
>> ax = axs[1, 1]
>> ax.imshow(gaussian_filter(a, 3, causal=(True, False), mode='nearest'), origin='lower')
>> ax.set_title('causal True, False')

omfit_classes.utils_math.bicoherence(s1, s2=None, fs=1.0, nperseg=None, noverlap=None, **kwargs)[source]

Compute the bicoherence between two signals of the same length, s1 and s2, using the function scipy.signal.spectrogram.

Sourced from https://stackoverflow.com/questions/4194554/function-for-computing-bicoherence

Parameters
  • s1 – ndarray. Time series of measurement values

  • s2 – ndarray. Time series of measurement values (default of None uses s1)

  • fs – Sampling frequency of the x time series. Defaults to 1.0.

  • nperseg – int. Length of each segment. Defaults to None, but if window is str or tuple, is set to 256, and if window is array_like, is set to the length of the window.

  • noverlap – int. Number of points to overlap between segments. If None, noverlap = nperseg // 8. Defaults to None.

  • kwargs – All additional key word arguments are passed to signal.spectrogram

Return f, bicoherence

array of frequencies and matrix of bicoherence at those frequencies

utils_fusion

omfit_classes.utils_fusion.gyroradius(T, Bt, Z, m)[source]

This function calculates the plasma gyroradius

Parameters
  • T – Ion temperature [eV]

  • Bt – Magnetic field [T]

  • Z – charge

  • m – mass [AMU]

Returns

gyroradius [m]
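
The underlying relation is rho = m v_th / (|Z| e B); a sketch under the assumption that the thermal speed convention is v_th = sqrt(2T/m) (some references drop the factor of 2, so treat the prefactor as an assumption):

```python
import numpy as np

QE = 1.602176634e-19     # elementary charge [C]
AMU = 1.66053906660e-27  # atomic mass unit [kg]

def thermal_gyroradius(T, Bt, Z, m):
    """Thermal gyroradius [m]: rho = m*v_th / (|Z| e B), with T in eV,
    Bt in Tesla, Z the charge number, and m in AMU."""
    mass = m * AMU
    v_th = np.sqrt(2.0 * T * QE / mass)   # T[eV] * QE converts eV -> J
    return mass * v_th / (abs(Z) * QE * Bt)
```

For deuterium (Z=1, m=2) at 1 keV in a 2 T field this gives roughly 3 mm.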

omfit_classes.utils_fusion.banana_width(T, Bt, Z, m, epsilon, q)[source]

This function estimates the banana orbit width

Parameters
  • T – Temperature [eV]

  • Bt – Magnetic field [T]

  • Z – Charge

  • m – Mass [AMU]

  • epsilon – Inverse aspect ratio

  • q – Safety factor

Returns

Estimate of banana orbit width [m]

omfit_classes.utils_fusion.lambda_debye(**keyw)[source]

Debye length [m]

Needs ne [m^-3], te [eV]

Formula: \(\sqrt{\frac{\varepsilon_0 T_e}{q_e n_e}}\)
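
The quoted formula translates directly to code (the eV-to-Joule conversion cancels one factor of the elementary charge, which is why q_e appears in the denominator); a standalone sketch:

```python
import numpy as np

EPS0 = 8.8541878128e-12  # vacuum permittivity [F/m]
QE = 1.602176634e-19     # elementary charge [C]

def lambda_debye(ne, te):
    """Debye length [m] from sqrt(eps0 * Te / (qe * ne)),
    with ne in m^-3 and te in eV."""
    return np.sqrt(EPS0 * te / (QE * ne))
```

For typical core tokamak values (ne = 1e20 m^-3, te = 1 keV) this is a few tens of micrometers.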

omfit_classes.utils_fusion.bmin_classical(**keyw)[source]

Classical distance of minimum approach [m]

Needs zeff [-], te [eV]

omfit_classes.utils_fusion.bmin_qm(**keyw)[source]

Quantum mechanical distance of minimum approach [m]

Needs te [eV]

omfit_classes.utils_fusion.ln_Lambda(**keyw)[source]

Coulomb logarithm [-]

Needs te [eV], zeff [-], ne [m^-3]

omfit_classes.utils_fusion.nu_e(**keyw)[source]

Electron collision frequency [1/s] Eq. (A9) in UW CPTC 09-6R

Needs te [eV], zeff [-], ne [m^-3]

omfit_classes.utils_fusion.vTe(**keyw)[source]

Electron thermal velocity [m/s]

Needs te [eV]

omfit_classes.utils_fusion.lambda_e(**keyw)[source]

Electron mean free path [m]

Needs te [eV], zeff [-], ne [m^-3]

omfit_classes.utils_fusion.omega_transit_e(**keyw)[source]

Electron transit frequency [1/s]

Needs q [-], R0 [m], te [eV]

omfit_classes.utils_fusion.epsilon(**keyw)[source]

Inverse aspect ratio [-]

Needs (rho [m], R0 [m]) or (r_minor, R_major)

omfit_classes.utils_fusion.f_c(**keyw)[source]

Flow-weighted fraction of circulating particles

Needs epsilon inputs

omfit_classes.utils_fusion.f_t(**keyw)[source]

Flow-weighted fraction of trapped particles

Needs epsilon inputs

omfit_classes.utils_fusion.fsa_B2_over_fsa_b_dot_B(**keyw)[source]

Approximation of geometric factor <B_0^2>/<(b.grad(B_0))^2>. [m^-2]

Needs R0 [m], q [-], epsilon inputs

omfit_classes.utils_fusion.nu_star_e(**keyw)[source]

Electron collisionality parameter [-]

Needs R0 [m], q [-], te [eV], zeff [-], ne [m^-3], epsilon inputs

omfit_classes.utils_fusion.omega_transit_e_tau(**keyw)[source]

Dimensionless omega_transit [-]

Needs te [eV], zeff [-], ne [m^-3], q [-], R0 [m]

omfit_classes.utils_fusion.M_visc_e(**keyw)[source]

Dimensionless electron viscosity matrix M_e [-] 2 x 2 x len(ne)

Needs te [eV], zeff [-], ne [m^-3], q [-], R0 [m]

omfit_classes.utils_fusion.N_fric_e(**keyw)[source]

Dimensionless electron friction matrix N_e [-] 2 x 2 x len(zeff)

Needs zeff [-]

omfit_classes.utils_fusion.inverse_N(**keyw)[source]

Inverse of the electron friction matrix [-]

Needs zeff

omfit_classes.utils_fusion.Spitzer_factor(**keyw)[source]

Spitzer resistivity factor [-]

Needs zeff [-]

omfit_classes.utils_fusion.inverse_NM(**keyw)[source]

Inverse of the sum N_fric_e + M_fric_e [-]

Needs te [eV], zeff [-], ne [m^-3], q [-], R0 [m]

omfit_classes.utils_fusion.resistive_factor(**keyw)[source]

Resistive factor [-]

Needs te [eV], zeff [-], ne [m^-3], q [-], R0 [m]

omfit_classes.utils_fusion.eta_0(**keyw)[source]

Reference resistivity [ohm-m]

Needs ne [m^-3], te [eV], zeff [-]

omfit_classes.utils_fusion.eta_par_neo(**keyw)[source]

Parallel neoclassical resistivity [ohm-m]

Needs te [eV], zeff [-], ne [m^-3], q [-], R0 [m]

omfit_classes.utils_fusion.q_rationals(x, q, nlist, mlist=None, doPlot=False)[source]

Evaluate rational flux surfaces (m/n) given a q profile

Parameters
  • x – x axis over which q profile is defined

  • q – q profile

  • nlist – list of n mode numbers

  • mlist – list of m mode numbers (if None returns all possible (m/n) rationals)

  • doPlot – Plot (either True or matplotlib axes)

Returns

dictionary with (m,n) keys and x intersections as values
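
Locating the crossings amounts to finding sign changes of q - m/n and interpolating linearly between grid points; a sketch with assumed names (the real function also handles mlist=None and plotting):

```python
import numpy as np

def rational_surfaces(x, q, nlist, mlist):
    """Dictionary of {(m, n): x positions} where the q profile crosses m/n."""
    x, q = np.asarray(x, float), np.asarray(q, float)
    out = {}
    for n in nlist:
        for m in mlist:
            d = q - m / n
            # sign changes of d between adjacent grid points mark crossings
            idx = np.nonzero(d[:-1] * d[1:] < 0)[0]
            xs = [x[i] + (x[i + 1] - x[i]) * d[i] / (d[i] - d[i + 1]) for i in idx]
            if xs:
                out[(m, n)] = np.array(xs)
    return out
```

A non-monotonic q profile (e.g. reversed shear) simply yields more than one x per rational, which is why the values are arrays.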

omfit_classes.utils_fusion.tokamak(tokamak, output_style='OMFIT', allow_not_recognized=True, translation_dict=None)[source]

Function that sanitizes a user-input tokamak name into a format that is recognized by other codes

Parameters
  • tokamak – user string of the tokamak

  • output_style – format of the tokamak used for the output one of [‘OMFIT’,’TRANSP’,’GPEC’]

  • allow_not_recognized – allow a user to enter a tokamak which is not recognized

  • translation_dict – dictionary used for further translation. This is handy for example in situations where we want to get the same string back independently of whether it is a older tokamak or its upgraded version. For example tokamak(‘NSTX-U’, translation_dict={‘NSTXU’:’NSTX’})

Returns

sanitized string
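
The normalization involved can be sketched as follows (a hypothetical alias table covering only a few devices; the real function recognizes many more and also honors output_style):

```python
def sanitize_tokamak(name, translation_dict=None, allow_not_recognized=True):
    """Normalize a user-supplied tokamak name to a canonical spelling."""
    aliases = {'D3D': 'DIII-D', 'DIIID': 'DIII-D', 'NSTXU': 'NSTX-U',
               'EAST': 'EAST'}                       # illustrative subset only
    def squash(s):
        # ignore case, hyphens, underscores, and spaces when matching
        return ''.join(c for c in s.upper() if c.isalnum())
    out = aliases.get(squash(name))
    if out is None:
        if not allow_not_recognized:
            raise ValueError('unrecognized tokamak: %r' % name)
        out = name
    if translation_dict:
        out = translation_dict.get(squash(out), out)
    return out

print(sanitize_tokamak('d3d'))                                   # DIII-D
print(sanitize_tokamak('NSTX-U', translation_dict={'NSTXU': 'NSTX'}))  # NSTX
```

The second call reproduces the translation_dict example from the parameter description above.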

omfit_classes.utils_fusion.is_device(devA, devB)[source]

Compare strings or lists of strings for equivalence in tokamak name

Parameters
  • devA – A string or list of strings

  • devB – A string or list of strings

Returns

True or False

Example: is_device(‘D3D’,[‘DIII-D’,’MAST’])

omfit_classes.utils_fusion.device_specs(device='DIII-D')[source]

Returns a dictionary of information that is specific to a particular tokamak

Parameters

device – The name of a tokamak. It will be evaluated with the tokamak() function so variation in spelling and capitalization will be tolerated. This function has some additional translation capabilities for associating MDSplus servers with tokamaks; for example, “EAST_US”, which is the entry used to access the eastdata.gat.com MDSplus mirror, will be translated to EAST.

Returns

A dictionary with as many static device measurements as are known

omfit_classes.utils_fusion.nclass_conductivity(psi_N=None, Te=None, ne=None, Ti=None, q=None, eps=None, R=None, fT=None, volume=None, Zeff=None, nis=None, Zis=None, Zdom=None, version='osborne', return_info_pack=False, plot_slice=None, sigma_compare=None, sigma_compare_label='Input for comparison', spitzer_compare=None, spitzer_compare_label='Spitzer input for comparison', charge_number_to_use_in_ion_collisionality='Koh', charge_number_to_use_in_ion_lnLambda='Zavg')[source]

Calculation of neoclassical conductivity

See: O. Sauter, et al., Phys. Plasmas 6, 2834 (1999); doi:10.1063/1.873240. Neoclassical conductivity appears in equations 5, 7, 13a, and in unnumbered equations in the conclusion.

Other references:

S Koh, et al., Phys. Plasmas 19, 072505 (2012); doi: 10.1063/1.4736953

for dealing with ion charge number when there are multiple species

T Osborne, “efit.py Kinetic EFIT Method”, private communication (2013);

this is a word file with a description of equations used to form the current profile constraint

O Sauter, et al., Phys. Plasmas 9, 5140 (2002); doi:10.1063/1.1517052

this has corrections for Sauter 1999 but it also has a note on what Z to use in which equations; it argues that ion equations should use the charge number of the main ions for Z instead of the ion effective charge number from Koh 2012

Sauter website

Accurate neoclassical resistivity, bootstrap current and other transport coefficients (Fortran 90 subroutines and matlab functions): has some code that was used to check the calculations in this script (BScoeff.m, nustar.m, sigmaneo.m, jdotB_BS.m)

GACODE NEO source

Calculations from NEO (E. A. Belli)

Update August 2021: added a new set of analytical formulae for the computation of the neoclassical conductivity from

A. Redl, et al., Phys. Plasmas 28, 022502 (2021) https://doi.org/10.1063/5.0012664 and all relevant variables are marked as neo_2021

This function was initially written as part of the Kolemen Group Automatic Kinetic EFIT Project (auto_kEFIT).

Parameters
  • psi_N – position basis for all profiles, required only for plotting (normalized poloidal magnetic flux)

  • Te – electron temperature in eV as a function of time and position (time should be first axis, then position)

  • ne – electron density in m^-3 (vs. time and psi)

  • Ti – ion temperature in eV

  • Zeff – [optional if nis and Zis are provided] effective charge state of ions = sum_j(n_j (Z_j)^2)/sum_j(n_j Z_j) where j is ion species (this is probably a sum over deuterium and carbon)

  • nis – [optional if Zeff is provided] list of ion densities in m^-3

  • Zis – [optional if Zeff is provided] ion charge states (list of scalars)

  • Zdom – [might be optional] specify the charge number of the dominant ion species. Defaults to the one with the highest total number of particles (volume integral of ni). If using the estimation method where only Zeff is provided, then Zdom is assumed to be 1 if not provided.

  • q – safety factor

  • eps – inverse aspect ratio

  • R – major radius of the geometric center of each flux surface

  • fT – trapped particles fraction

  • volume – [not needed if Zdom is provided, unlikely to cause trouble if not provided even when “needed”] volume enclosed by each flux surface, used to identify dominant ion species if dominant ion species is not defined explicitly by doing a volume integral (so we need this so we can get dV/dpsiN). If volume is needed but not provided, it will be crudely estimated. Errors in the volume integral are very unlikely to affect the selection of the dominant ion species (it would have to be a close call to begin with), so it is not critical that volume be provided with high quality, if at all.

  • return_info_pack – Boolean: If true, returns a dictionary full of many intermediate variables from this calculation instead of just conductivity

  • plot_slice – Set to the index of the timeslice to plot in order to plot one timeslice of the calculation, including input profiles and intermediate quantities. Set to None for no plot (default)

  • sigma_compare – provide a conductivity profile for comparison in Ohm^-1 m^-1

  • sigma_compare_label – plot label to use with sigma_compare

  • spitzer_compare – provide another conductivity profile for comparison (so you can compare neoclassical and spitzer) (Ohm^1 m^1)

  • spitzer_compare_label – plot label to use with spitzer_compare

  • charge_number_to_use_in_ion_collisionality

    instruction for replacing single ion species charge number Z in nuistar equation when going to multi-ion species plasma. Options are: [‘Koh’, ‘Dominant’, ‘Zeff’, ‘Zavg’, ‘Koh_avg’]

    Dominant uses charge number of ion species which contributed the most electrons (recommended by Sauter 2002)
    Koh uses expression from Koh 2012 page 072505-11 evaluated for dominant ion species (recommended by Koh 2012)
    Koh_avg evaluates Koh for all ion species and then averages over species
    Zeff uses Z_eff (No paper recommends using this but it appears to be used by ONETWO)
    Zavg uses ne/sum(ni) (Koh 2012 recommends using this except for collision frequency)

    Use Koh for best agreement with TRANSP

  • charge_number_to_use_in_ion_lnLambda

    instruction for replacing single ion species charge number Z in lnLambda equation when going to multi-ion species plasma. Options are: [‘Koh’, ‘Dominant’, ‘Zeff’, ‘Zavg’, ‘Koh_avg’]

    Use Koh for best agreement with TRANSP

Returns

neoclassical conductivity in (Ohm^-1 m^-1) as a function of time and input psi_N (after interpolation/extrapolation). If output with “return_info_pack”, the return is a dictionary containing several intermediate variables which are used in the calculation (collisionality, lnLambda, etc.)

omfit_classes.utils_fusion.nclass_conductivity_from_gfile(psi_N=None, Te=None, ne=None, Ti=None, gEQDSK=None, Zeff=None, nis=None, Zis=None, Zdom=None, return_info_pack=False, plot_slice=None, charge_number_to_use_in_ion_collisionality='Koh', charge_number_to_use_in_ion_lnLambda='Zavg')[source]

Wrapper for nclass_conductivity that extracts g-file quantities and interpolates them for you.

Calculation of neoclassical conductivity. See: O. Sauter, et al., Phys. Plasmas 6, 2834 (1999); doi:10.1063/1.873240. Neoclassical conductivity appears in equations 5, 7, 13a, and in unnumbered equations in the conclusion.

This function was initially written as part of the Kolemen Group Automatic Kinetic EFIT Project (auto_kEFIT).

Parameters
  • psi_N – position basis for all non-gfile profiles

  • Te – electron temperature in eV as a function of time and position (time should be first axis, then position)

  • ne – electron density in m^-3 (vs. time and psi)

  • Ti – ion temperature in eV

  • Zeff – [optional if nis and Zis are provided] effective charge state of ions = sum_j(n_j (Z_j)^2)/sum_j(n_j Z_j) where j is ion species (this is probably a sum over deuterium and carbon)

  • nis – [optional if Zeff is provided] list of ion densities in m^-3

  • Zis – [optional if Zeff is provided] ion charge states (list of scalars)

  • Zdom – [might be optional] specify the charge number of the dominant ion species. Defaults to the one with the highest total number of particles (volume integral of ni). If using the estimation method where only Zeff is provided, then Zdom is assumed to be 1 if not provided.

  • gEQDSK – an OMFITcollection of g-files or a single g-file as an instance of OMFITgeqdsk

  • return_info_pack – Boolean: If true, returns a dictionary full of many intermediate variables from this calculation instead of just conductivity

  • plot_slice – Set to the index of the timeslice to plot in order to plot one timeslice of the calculation, including input profiles and intermediate quantities. Set to None for no plot (default)

  • charge_number_to_use_in_ion_collisionality

    instruction for replacing single ion species charge number Z in nuistar equation when going to multi-ion species plasma. Options are: [‘Koh’, ‘Dominant’, ‘Zeff’, ‘Zavg’, ‘Koh_avg’]

    Dominant uses charge number of ion species which contributed the most electrons (recommended by Sauter 2002)
    Koh uses expression from Koh 2012 page 072505-11 evaluated for dominant ion species (recommended by Koh 2012)
    Koh_avg evaluates Koh for all ion species and then averages over species
    Zeff uses Z_eff (No paper recommends using this but it appears to be used by ONETWO)
    Zavg uses ne/sum(ni) (Koh 2012 recommends using this except for collision frequency)

    Use Koh for best agreement with TRANSP

  • charge_number_to_use_in_ion_lnLambda

    instruction for replacing single ion species charge number Z in lnLambda equation when going to multi-ion species plasma. Options are: [‘Koh’, ‘Dominant’, ‘Zeff’, ‘Zavg’, ‘Koh_avg’]

    Use Koh for best agreement with TRANSP

Returns

neoclassical conductivity in (Ohm^-1 m^-1) as a function of time and input psi_N (after interpolation/extrapolation). If output with “return_info_pack”, the return is a dictionary containing several intermediate variables which are used in the calculation (collisionality, lnLambda, etc.)

omfit_classes.utils_fusion.sauter_bootstrap(psi_N=None, Te=None, Ti=None, ne=None, p=None, nis=None, Zis=None, Zeff=None, gEQDSKs=None, R0=None, device='DIII-D', psi_N_efit=None, psiraw=None, R=None, eps=None, q=None, fT=None, I_psi=None, nt=None, version='osborne', debug_plots=False, return_units=False, return_package=False, charge_number_to_use_in_ion_collisionality='Koh', charge_number_to_use_in_ion_lnLambda='Zavg', dT_e_dpsi=None, dT_i_dpsi=None, dn_e_dpsi=None, dnis_dpsi=None)[source]

Sauter’s formula for bootstrap current

See: O. Sauter, et al., Phys. Plasmas 6, 2834 (1999); doi:10.1063/1.873240

Other references:

S Koh, et al., Phys. Plasmas 19, 072505 (2012); doi: 10.1063/1.4736953

for dealing with ion charge number when there are multiple species

T Osborne, “efit.py Kinetic EFIT Method”, private communication (2013);

this is a word file with a description of equations used to form the current profile constraint

O Sauter, et al., Phys. Plasmas 9, 5140 (2002); doi:10.1063/1.1517052

this has corrections for Sauter 1999 but it also has a note on what Z to use in which equations; it argues that ion equations should use the charge number of the main ions for Z instead of the ion effective charge number from Koh 2012

Sauter website

Accurate neoclassical resistivity, bootstrap current and other transport coefficients (Fortran 90 subroutines and matlab functions): has some code that was used to check the calculations in this script (BScoeff.m, nustar.m, sigmaneo.m, jdotB_BS.m)

GACODE NEO source

Calculations from NEO (E. A. Belli)

Y R Lin-Liu, et al., “Zoo of j’s”, DIII-D physics memo (1996);

got hardcopy from Sterling Smith & photocopied

Update August 2021: added a new set of analytical formulae for the computation of the neoclassical conductivity from

A. Redl, et al., Phys. Plasmas 28, 022502 (2021) https://doi.org/10.1063/5.0012664 and all relevant variables are marked as neo_2021

This function was initially written as part of the Kolemen Group Automatic Kinetic EFIT Project (auto_kEFIT).

Parameters
  • psi_N – normalized poloidal magnetic flux as a position coordinate for input profiles Te, Ti, ne, etc.

  • Te – electron temperature in eV, first dimension: time, second dimension: psi_N

  • Ti – ion temperature in eV, 2D with dimensions matching Te (time first)

  • ne – electron density in m^-3, dimensions matching Te

  • p – total pressure in Pa, dimensions matching Te

  • Zeff – [optional if nis and Zis are provided] effective charge state of ions = sum_j(n_j (Z_j)^2)/sum_j(n_j Z_j) where j is ion species (this is probably a sum over deuterium and carbon)

  • nis – [optional if Zeff is provided] list of ion densities in m^-3

  • Zis – [optional if Zeff is provided] ion charge states (list of scalars)

  • R0 – [optional if device is provided and recognized] The geometric center of the tokamak’s vacuum vessel in m. (For DIII-D, this is 1.6955 m (Osborne, Lin-Liu))

  • device – [used only if R0 is not provided] The name of a tokamak for the purpose of looking up R0

  • gEQDSKs

    a collection of g-files from which many parameters will be derived. The following quantities are taken from g-files if ANY of the required ones are missing:

    param psi_N_efit

    [optional] psi_N for the EFIT quantities if different from psi_N for kinetic profiles

    param nt

    [optional] number of time slices in equilibrium data (if you don’t want to tell us, we will measure the shape of the array)

    param psiraw

    poloidal flux before normalization (psi_N is derived from this).

    param R

    major radius coordinate R of each flux surface’s geometric center in m

    param q

    safety factor (inverse of the rotational transform)

    param eps

    inverse aspect ratio of each flux surface: a/R

    param fT

    trapped particle fraction on each flux surface

    param I_psi

    also known as F = R*Bt, averaged over each flux surface

  • version

    which quantity to return:

    ‘jB_fsa’ is the object directly from Sauter’s paper: 2nd term on RHS of last equation in conclusion.
    ‘osborne’ is jB_fsa w/ |I_psi| replaced by R0. Motivated by memo from T. Osborne about kinetic EFITs.
    ‘jboot1’ is the 2nd term in the 1st equation of the conclusion of Sauter 1999 w/ correction from Sauter 2002 erratum.
    ‘jboot1BROKEN’ is jboot1 without correction from Sauter 2002 (THIS IS FOR TESTING/COMPARISON ONLY).
    ‘neo_2021’ is a new set of analytical coefficients from A. Redl, et al. (the same analytical structure as ‘jboot1’ and ‘jboot1BROKEN’).

    You should use jboot1 if you want <J.B>
    You should use osborne if you want J *** Put this into current_to_efit_form() to make an EFIT
    You should use jboot1 or jB_fsa to compare to Sauter’s paper, equations 1 and 2 of the conclusion
    You should use jboot1BROKEN to compare to Sauter 1999 without the 2002 correction

  • debug_plots – plot internal quantities for debugging

  • return_units – If False: returns just the current profiles in one 2D array. If True: returns a 3 element tuple containing the current profiles, a plain string containing the units, and a formatted string containing the units

  • return_package – instead of just a current profile, return a dictionary containing the current profile as well as other information

  • charge_number_to_use_in_ion_collisionality

    instruction for replacing single ion species charge number Z in nuistar equation when going to multi-ion species plasma. Options are: [‘Koh’, ‘Dominant’, ‘Zeff’, ‘Zavg’, ‘Koh_avg’]

    Dominant uses charge number of ion species which contributed the most electrons (recommended by Sauter 2002)
    Koh uses expression from Koh 2012 page 072505-11 evaluated for dominant ion species (recommended by Koh 2012)
    Koh_avg evaluates Koh for all ion species and then averages over species
    Zeff uses Z_eff (No paper recommends using this but it appears to be used by ONETWO)
    Zavg uses ne/sum(ni) (Koh 2012 recommends using this except for collision frequency)

    Use Koh for best agreement with TRANSP
    Use Zavg for best agreement with recommendations by Koh 2012

  • charge_number_to_use_in_ion_lnLambda

    instruction for replacing single ion species charge number Z in lnLambda equation when going to multi-ion species plasma. Options are: [‘Koh’, ‘Dominant’, ‘Zeff’, ‘Zavg’, ‘Koh_avg’]

    Use Koh for best agreement with TRANSP
    Use Koh for best agreement with recommendations by Koh 2012

Return jB

flux surface averaged j_bootstrap * B with some modifications according to which version you select

Return units

[only if return_units==True] a string with units like “A/m^2”

Return units_format

[only if return_units==True] a TeX formatted string with units like “$A/m^2$” (can be included in plot labels directly)

This is first equation in the conclusion of Sauter 1999 (which is equation 5 with stuff plugged in) (with correction from the erratum (Sauter 2002):

<j_par * B> = sigma_neo * <E_par * B> - I(psi) p_e *
                  [L_31 * p/p_e * d_psi(ln(p)) + L_32 * d_psi(ln(T_e))
                  + L_34 * alpha * (1-R_pe)/R_pe * d_psi(ln(T_i))]

The second equation in the conclusion is nicer looking:

<j_par * B> = sigma_new * <E_par * B> - I(psi) p *
               [L_31 d_psi(ln(n_e)) + R_pe * (L_31 + L_32) * d_psi(ln(T_e))
               + (1-R_pe)*(1+alpha*L_34/L_31)*L_31 * d_psi(ln(T_i))]

In both equations, the first term is ohmic current and the second term is bootstrap current. The second equation uses some approximations which make the result much smoother. The first equation had an error in the original Sauter 1999 paper that was corrected in the 2002 erratum.

  • < > denotes a flux surface average (mentioned on page 2835)

  • j_par is the parallel current (parallel to the magnetic field B) (this is what we’re trying to find)

  • B is the total magnetic field

  • sigma_neo is the neoclassical conductivity given by equation 7 on page 2835 or equation 13 on page 2837 (this is mentioned as neoclassical resistivity on page 2836, but the form of the equation clearly shows that it is conductivity, the reciprocal of resistivity. Also the caption of figure 2 confirms that conductivity is what is meant.)

  • E_par is the parallel electric field

  • I(psi) = R * B_phi (page 2835)

  • p_e is the electron pressure

  • L_31, L_32, and L_34 are given by equations 8, 9, and 10 respectively (eqns on page 2835). Also they are given again by eqns 14-16 on pages 2837-2838

  • p is the total pressure

  • d_psi() is the derivative with respect to psi (not psi_N)

  • T_e is the electron temperature

  • alpha is given by equation 11 on page 2835 or by eqn 17a on page 2838

  • T_i is the ion temperature

  • R_pe = p_e/p

  • f_T, the trapped particle fraction, appears in many equations and is given by equation 12 on page 2835, but also in equation 18b with nu_i* in equation 18c

Useful quantities are found in equation 18.
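The bootstrap term of the second (smoother) conclusion equation can be sketched with numpy as below. This is an illustrative helper, not the sauter_bootstrap() API: it assumes the dimensionless coefficients L_31, L_32, L_34 and alpha have already been evaluated from the Sauter formulas, and it returns only the bootstrap contribution to <j_par * B>.

```python
import numpy as np

def jboot_second_form(psi, ne, Te, Ti, p, pe, I_psi, L31, L32, L34, alpha):
    # Hypothetical helper: evaluates -I(psi) p [L31 d_psi(ln n_e)
    #   + R_pe (L31+L32) d_psi(ln T_e)
    #   + (1-R_pe)(1 + alpha L34/L31) L31 d_psi(ln T_i)]
    # Coefficients L31, L32, L34, alpha are assumed precomputed (Sauter eqns 14-17).
    dln = lambda f: np.gradient(np.log(f), psi)  # d_psi(ln(f)); psi, not psi_N
    Rpe = pe / p  # electron pressure fraction
    return -I_psi * p * (
        L31 * dln(ne)
        + Rpe * (L31 + L32) * dln(Te)
        + (1 - Rpe) * (1 + alpha * L34 / L31) * L31 * dln(Ti)
    )
```

The coefficient values used in practice depend on collisionality and trapped fraction; here they are inputs so the structure of the equation stays visible.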

omfit_classes.utils_fusion.current_to_efit_form(r0, inv_r, cross_sec, total_current, x)[source]

Conversion of current density to EFIT constraint format. Adapted from currentConstraint.py by O. Meneghini

Parameters
  • r0 – major radius of the geometric center of the vacuum vessel (1.6955 m for DIII-D) (scalar)

  • inv_r – flux surface average (1/R); units should be reciprocal of r0 (function of position or function of position and time)

  • cross_sec – cross sectional area of the plasma in m^2 (scalar or function of time)

  • total_current – total plasma current in A (scalar or function of time)

  • x – input current density to be converted in A/m^2 (function of position or function of position and time)

Returns

x normalized to EFIT format (function of position or function of position and time)

omfit_classes.utils_fusion.estimate_ohmic_current_profile(cx_area, sigma, itot, jbs=None, ibs=None, jdriven=None, idriven=None)[source]

Estimate the profile of ohmic current using total current, the profile of bootstrap and driven current, and neoclassical conductivity. The total Ohmic current profile is calculated by integrating bootstrap and driven current and subtracting this from the total current. The Ohmic current profile is assigned assuming flat loop voltage and the total is scaled to match the estimated total Ohmic current.

All inputs should be on the same coordinate system with the same dimensions, except itot, ibs, and idriven should lack the position axis. If inputs have more than one dimension, position should be along the axis with index = 1 (the second dimension).

This function was initially written as part of the Kolemen Group Automatic Kinetic EFIT Project (auto_kEFIT).

Parameters
  • cx_area – Cross sectional area enclosed by each flux surface as a function of psin in m^2

  • sigma – Neoclassical conductivity in Ohm^-1 m^-1

  • itot – Total plasma current in A

  • jbs – [optional if ibs is provided] Bootstrap current density profile in A/m^2. If this comes from sauter_bootstrap(), the recommended version is ‘osborne’

  • ibs – [optional if jbs is provided] Total bootstrap current in A

  • jdriven – [optional if idriven is provided] Driven current density profile in A/m^2

  • idriven – [optional if jdriven is provided] Total driven current in A

Returns

Ohmic current profile as a function of psin in A/m^2
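The algorithm described above can be sketched in a few lines of numpy. This is a simplified, hypothetical helper (1D profiles only; the real function also accepts time-dependent inputs and the ibs/idriven alternatives):

```python
import numpy as np

def ohmic_profile_sketch(cx_area, sigma, itot, jbs, jdriven):
    # Incremental cross-sectional area between adjacent flux surfaces
    dA = np.gradient(cx_area)
    ibs = np.sum(jbs * dA)          # total bootstrap current [A]
    idriven = np.sum(jdriven * dA)  # total driven current [A]
    iohm = itot - ibs - idriven     # estimated total ohmic current [A]
    # Flat loop voltage => j_ohm proportional to sigma;
    # scale so the integrated ohmic current matches iohm
    return sigma * iohm / np.sum(sigma * dA)
```

By construction, integrating the returned profile over the cross section recovers the estimated total ohmic current.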

omfit_classes.utils_fusion.intrinsic_rotation(geo_a, geo_R, geo_Rx, R_mp, rho, I_p_sgn, Te, Ti, q, fc, B0, rho_ped, rho_sep=1.0, C_phi=1.0, d_c=1.0)[source]

Tim Stoltzfus-Dueck & Arash Ashourvan intrinsic rotation model

Parameters
  • geo_a – [m] plasma minor radius evaluated at the midplane

  • geo_R – [m] plasma major radius evaluated at the midplane

  • geo_Rx – [m] radial position of the X point

  • R_mp – [m] midplane radial coordinate from on-axis to the separatrix (LFS)

  • rho – normalised sqrt(toroidal flux)

  • I_p_sgn – sign of I_p to get the correct rotation direction; positive rotation is always co-current

  • Te – [eV] electron temperature profile

  • Ti – [eV] ion temperature profile

  • q – safety factor/q profile

  • fc – Flux surface averaged passing particles fraction profile

  • B0 – [T] Magnetic field on axis

  • rho_ped – rho value at pedestal top

  • rho_sep – rho value at separatrix (/pedestal foot)

  • C_phi – constant that translates Te scale length to turbulence scale length. default value = 1.75, range: [1.0,2.0]/[0.5,4]

  • d_c – ballooning parameter for turbulence, where 0.0 is symmetric in ballooning angle, 2.0 is all at LFS. default value = 1.0, range: [0.0,2.0]

Return omega_int

[rad/s] intrinsic plasma rotation at pedestal top

omfit_classes.utils_fusion.standardize_gas_species_name(name, output_form='simple', on_fail='raise')[source]

Standardizes gas species names that could come from different/unknown sources.

These include common impurity gas molecules, so nitrogen and deuterium translate to N2 and D2, since they form diatomic molecules.

For example, N2, N$_2$, N_2, and nitrogen all mean the same thing. This function should accept any one of those and turn it into the format you request.

Intended to handle exotic capitalization; e.g. corrects NE or ne into Ne.

Parameters
  • name – str The name of the species

  • output_form

    str
    simple: the shortest, simplest, unambiguous and correct form. E.g.: N2
    latex: latex formatting for subscripts. E.g.: N$_2$
    name: the name of the species. E.g.: nitrogen
    markup: symbol with punctuation like _ to indicate underscores, but no $. E.g.: N_2
    atom: just the symbol for the atom without molecular information. E.g.: N

    The atom form isn’t recommended as an output format; it’s here to simplify lookup in case someone indicates nitrogen gas by N instead of N2. Doesn’t work for mixed-element molecules.

  • on_fail

    str Behavior on lookup failure.

    raise: raise OMFITexception
    print: print an error message and return None
    quiet: quietly return None

Returns

str
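The lookup logic can be illustrated with a toy alias table. This is a hypothetical mini version covering only three species; the real function recognizes many more species and spellings:

```python
# Canonical forms for a few species (illustrative subset only)
FORMS = {
    'n2': {'simple': 'N2', 'latex': 'N$_2$', 'name': 'nitrogen', 'markup': 'N_2', 'atom': 'N'},
    'd2': {'simple': 'D2', 'latex': 'D$_2$', 'name': 'deuterium', 'markup': 'D_2', 'atom': 'D'},
    'ne': {'simple': 'Ne', 'latex': 'Ne', 'name': 'neon', 'markup': 'Ne', 'atom': 'Ne'},
}
# Lowercased spellings that all map to the same canonical key
ALIASES = {'n2': 'n2', 'n$_2$': 'n2', 'n_2': 'n2', 'nitrogen': 'n2', 'n': 'n2',
           'd2': 'd2', 'd$_2$': 'd2', 'd_2': 'd2', 'deuterium': 'd2',
           'ne': 'ne', 'neon': 'ne'}

def standardize_sketch(name, output_form='simple', on_fail='raise'):
    # Lowercasing handles exotic capitalization like NE -> Ne
    key = ALIASES.get(str(name).strip().lower())
    if key is None:
        if on_fail == 'raise':
            raise ValueError('Unrecognized gas species: %r' % name)
        if on_fail == 'print':
            print('Unrecognized gas species: %r' % name)
        return None
    return FORMS[key][output_form]
```

For example, 'nitrogen', 'N_2', and 'N$_2$' all standardize to 'N2' in the simple form.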

omfit_classes.utils_fusion.lookup_d3d_gas_species(shot, valve=None, output_form=None)[source]

Retrieves information on the gas species loaded into DIII-D gas injector(s)

Parameters
  • shot – int Shot to look up. The species loaded into each valve can change from shot to shot.

  • valve – str [optional] Name of the gas valve or injector, like ‘A’ or ‘PFX1’. ‘GASA’ and ‘A’ are interchangeable. The valve name is not case sensitive. If provided, returns a string with the gas species for this valve, or raises OMFITexception on failure. If not provided, returns a dictionary of all the gas valves and species in the database.

  • output_form

    str [optional] Reformat the name of the gas species to conform to a preferred standard. Options (with examples) are:

    simple: N2
    latex: N$_2$
    name: nitrogen
    markup: N_2

    Set to None to get the string as written in the database.

Returns

str or dict Either the species of a specific valve, or a dictionary of all recorded valves and species. Either the return value itself or the dictionary values will be strings that are padded with spaces. If a valve is not in use, its gas species will be recorded as ‘None ‘.

omfit_classes.utils_fusion.east_gas_injection(shot=None, valve=None, species=None, main_ion_species='D', server='EAST', tank_temperature=293.15, verbose=True, plot_total=False, plot_flow=False, tsmo=0.5, axs=None)[source]

Calculates EAST gas injection amounts based on tank pressure

Whereas DIII-D gas commands in V are often close to proportional to gas flows (because they’re really inputs to an external flow controller built into the valve assembly), EAST gas commands are much more raw. An EAST command of 5 V is NOT roughly double a command of 2.5 V, for example. 2.5 V might not even open the valve, and there’d be no way to be sure from just the command. There is no flow sensor inside the EAST injectors like there is at DIII-D (the source of the gasb_cal measurement instead of the gascgasb command). Lastly, the EAST reservoirs behind the injectors are small and so the flow rate vs. voltage is probably not constant. The “tank” here isn’t really a huge tank of gas, it’s a length of pipe between the big tank of gas and the injector itself. So, letting out enough gas to control a shot really can affect it significantly.

To get an actual measurement, we can turn to the pressure in the gas tank that feeds the injector and watch its decrease over time to get a flow rate. This script doesn’t calculate a flow rate, it provides the integral of the flow rate. Since the data are noisy, some smoothing is recommended before differentiation to find the flow rate; we leave the choice of smoothing strength to the end user.

Limitations:

1. COOLING: We assumed constant temperature so far, but the gas tank clearly is cooled by letting gas out because the pressure very slowly rebounds after the shot, presumably from the tank warming up again. Remember the “tank” is actually just a little length of pipe behind the injector. To see how much error this causes, just look at how much tank pressure rebounds after seeding stops. The time history usually extends long after the shot. It seems like a small enough error that it hasn’t been corrected yet.

2. NEEDS POST-PROCESSING: most users are probably interested in flow rates and will have to take derivatives of the outputs of this function to get them, including smoothing to defeat noise in the signals.

3. INCOMPLETE INFO: the electrons_per_molecule_puffed dictionary only lists a few species so far.

Parameters
  • shot – int EAST shot number, like 85293

  • valve – str Like OU1. Also accepts names like OUPEV1 and VOUPEV1; PEV and leading V will be removed. There are different naming conventions in different contexts and this function tries to parse them all.

  • species

    str Species in the tank, like Ne. Diluted species are accepted; for example, “50% Ne” will be split at % and give 0.5 of the molecules as Ne

    and 0.5 of the molecules as the main ion species (probably D2).

  • main_ion_species – str Species of main ions of the plasma, like D

  • server – str MDSplus server to use as the source of the data. Intended to allow choice between ‘EAST’ and ‘EAST_US’.

  • tank_temperature – float Temperature of the gas in the tank in K

  • verbose – bool Print information like gas totals

  • plot_total – bool Plot total seeding amount vs time

  • plot_flow – bool Plot flow rate (derivative of total seeding) vs time

  • tsmo – float Smoothing timescale in seconds, used in plots only. Very important for viewing flow rate.

  • axs – Axes instance or 1D array of Axes instances Axes used with plot_total or plot_flow. If none are provided, a new figure will be created. Number of Axes must be >= plot_flow+plot_total.

Returns

tuple of 1D arrays and a str
1D array: time in s
1D array: impurity electrons added
1D array: fuel electrons added
1D array: total molecules added
str: primary species added
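The central conversion, turning the tank pressure drop into cumulative molecules released via the ideal gas law, can be sketched as below. The tank volume here is a made-up placeholder, not the real EAST pipe volume, and constant temperature is assumed (see limitation 1):

```python
import numpy as np

K_B = 1.380649e-23  # Boltzmann constant [J/K]

def molecules_released(pressure_pa, tank_volume_m3, temperature_k=293.15):
    # Cumulative molecules let out = (P0 - P(t)) * V / (k_B * T),
    # assuming constant tank temperature (ideal gas law)
    pressure_pa = np.asarray(pressure_pa, dtype=float)
    return (pressure_pa[0] - pressure_pa) * tank_volume_m3 / (K_B * temperature_k)
```

As noted above, this gives the integral of the flow; users wanting flow rates should smooth and then differentiate the result.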

omfit_classes.utils_fusion.calc_h98_d3d(shot, calc_tau=False, calc_ptot=False, pinj_only=False, estimated_signals=None, data=None)[source]

H98 confinement quality calculation valid for DIII-D

There are some other H98 calculations for DIII-D:

h_thh98y2 is calculated by the Transport code and is often regarded as pretty accurate, but it will turn itself off at the slightest hint of impurity seeding. If there’s noise in one of the valves (GASB had an issue in 2021), this will be interpreted as seeding and the calculation will disable itself. So, it was pretty much always disabled because of this in years with noisy gas signals. The database that drives the 98y2 scaling goes up to Zeff=3, so a tiny bit of impurity seeding is tolerable. Even if Zeff is too high, it might still be interesting to see what H98 would be if it could be trustworthy.

h98y2_aot is calculated from Automatic OneTwo runs and can overestimate H98. The AOT calculation is usually available and doesn’t fail because of impurity seeding, but it’s probably less accurate than this calculation.

Parameters
  • shot – int

  • calc_tau – bool Calculate tauth instead of gathering tauth. Might be useful if the confinement code stops early and doesn’t write tauth.

  • calc_ptot – bool Calculate total power instead of gathering it. Useful if ptot isn’t written due to an error in another code

  • pinj_only – bool Approximate ptot by pinj, ignoring ohmic and ECH power. This might be okay.

  • estimated_signals – dict Data for estimating missing signals. Should contain a ‘time’ key whose value is a 1d float array. Should contain one additional key for each pointname that is being estimated, whose value is a 1d float array with length matching that of time.

  • data – dict [optional] Pass in an empty dict and data used in the calculation will be written to it. Intermediate quantities may be captured for inspection in this way.

Returns

tuple containing two 1D float arrays
Time in ms
H98 as a function of time (unitless)
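For context, H98 is the ratio of the measured thermal confinement time to the ITER IPB98(y,2) scaling. The real function gathers its inputs from DIII-D signals; the sketch below is just the scaling law itself, in the standard units convention (I_p in MA, B_t in T, line-averaged density in 10^19 m^-3, loss power in MW, R in m, M in amu):

```python
def tau_98y2(ip, bt, ne19, ploss, r, eps, kappa, m=2.0):
    # IPB98(y,2) thermal energy confinement time scaling [s]
    return (0.0562 * ip**0.93 * bt**0.15 * ne19**0.41 * ploss**(-0.69)
            * r**1.97 * eps**0.58 * kappa**0.78 * m**0.19)

def h98(tau_th, **scaling_inputs):
    # Confinement quality: measured thermal tau over the scaling prediction
    return tau_th / tau_98y2(**scaling_inputs)
```

A discharge whose measured tau_th equals the scaling prediction has H98 = 1 by definition.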

omfit_classes.utils_fusion.available_efits_from_guess(scratch_area, device, shot, default_snap_list=None, format='{tree}', **kw)[source]

Attempts to form a reasonable list of likely EFITs based on guesses

Parameters
  • scratch_area – dict Scratch area for storing results to reduce repeat calls. Mainly included to match call signature of available_efits_from_rdb(), since OMFITmdsValue already has caching.

  • device – str Device name

  • shot – int Shot number

  • default_snap_list – dict [optional] Default set of EFIT treenames. Newly discovered ones will be added to the list.

  • **kw

    quietly accepts and ignores other keywords for compatibility with other similar functions

Returns

(dict, str) Dictionary keys will be descriptions of the EFITs

Dictionary values will be the formatted identifiers. For now, the only supported format is just the treename. If lookup fails, the dictionary will be {‘’: ‘’} or will only contain default results, if any.

String will contain information about the discovered EFITs

omfit_classes.utils_fusion.available_EFITs(scratch_area, device, shot, allow_rdb=True, allow_mds=True, allow_guess=True, **kw)[source]

Attempts to look up a list of available EFITs using various sources

Parameters
  • scratch_area – dict Scratch area for storing results to reduce repeat calls.

  • device – str Device name

  • shot – int Shot number

  • allow_rdb – bool Allow connection to DIII-D RDB to gather EFIT information (only applicable for select devices) (First choice for supported devices)

  • allow_mds – bool Allow connection to MDSplus to gather EFIT information (only applicable to select devices) (First choice for non-RDB devices, second choice for devices that normally support RDB)

  • allow_guess – bool Allow guesses based on common patterns of EFIT availability on specific devices (Last resort, only if other options fail)

  • **kw

    Keywords passed to specific functions. Can include:

    default_snap_list: dict [optional] Default set of EFIT treenames. Newly discovered ones will be added to the list.

    format: str Instructions for formatting data to make the EFIT tag name. Provided for compatibility with available_efits_from_rdb() because the only option is ‘{tree}’.

Returns

(dict, str) Dictionary keys will be descriptions of the EFITs

Dictionary values will be the formatted identifiers. If lookup fails, the dictionary will be {‘’: ‘’} or will only contain default results, if any.

String will contain information about the discovered EFITs

utils_plot

omfit_classes.utils_plot.autofmt_sharexy(trim_xlabel=True, trim_ylabel=True, fig=None)[source]
omfit_classes.utils_plot.autofmt_sharey(trim_xlabel=True, fig=None, wspace=0)[source]

Prunes y-tick labels and y-axis labels from all but the first column’s axes and (optionally) moves columns closer together.

Parameters
  • trim_xlabel – bool. prune the rightmost xtick label to prevent overlap.

  • fig – Figure. Defaults to current figure.

  • wspace – Horizontal spacing between axes.

omfit_classes.utils_plot.autofmt_sharex(trim_ylabel=True, fig=None, hspace=0)[source]

Prunes x-tick labels and x-axis labels from all but the last row axes and moves rows (optionally) closer together.

Parameters
  • trim_ylabel – bool. prune top ytick label to prevent overlap.

  • fig – Figure. Defaults to current figure.

  • hspace – Vertical spacing between axes.

omfit_classes.utils_plot.uerrorbar(x, y, ax=None, **kwargs)[source]

Given arguments y or x,y where x and/or y have uncertainties, feed the appropriate terms to matplotlib’s errorbar function.

If y or x is more than 1D, it is flattened along every dimension but the last.

Parameters
  • x – array of independent axis values

  • y – array of values with uncertainties, for which shaded error band is plotted

  • ax – The axes instance into which to plot (default: pyplot.gca())

  • **kwargs – Passed to ax.errorbar

Returns

list. A list of ErrorbarContainer objects containing the line, bars, and caps of each (x,y) along the last dimension.

class omfit_classes.utils_plot.Uband(line, bands)[source]

Bases: object

This class wraps the line and PolyCollection(s) associated with a banded errorbar plot for use in the uband function.

Parameters
  • line – Line2D A line of the x,y nominal values

  • bands – list of PolyCollections The fill_between and/or fill_betweenx PolyCollections spanning the std_devs of the x,y data

omfit_classes.utils_plot.uband(x, y, ax=None, fill_kwargs=None, **kwargs)[source]

Given arguments x,y where either or both have uncertainties, plot x,y using pyplot.plot of the nominal values and surround it with a shaded error band using matplotlib’s fill_between and/or fill_betweenx.

If y or x is more than 1D, it is flattened along every dimension but the last.

Parameters
  • x – array of independent axis values

  • y – array of values with uncertainties, for which shaded error band is plotted

  • ax – The axes instance into which to plot (default: pyplot.gca())

  • fill_kwargs – dict. Passed to pyplot.fill_between

  • **kwargs – Passed to pyplot.plot

Returns

list. A list of Uband objects containing the line and bands of each (x,y) along the last dimension.

omfit_classes.utils_plot.hardcopy(fn, bbox_inches='tight', fig=None, **keyw)[source]
omfit_classes.utils_plot.set_fontsize(fig=None, fontsize='+0')[source]

For each text object of a figure fig, set the font size to fontsize

Parameters
  • fig – matplotlib.figure object

  • fontsize – can be an absolute number (e.g 10) or a relative number (-2 or +2)

Returns

None

omfit_classes.utils_plot.user_lines_cmap_cycle()[source]

return colormap chosen by the user for representation of lines

omfit_classes.utils_plot.user_image_cmap_cycle()[source]

return colormap chosen by the user for representation of images

omfit_classes.utils_plot.color_cycle(n=10, k=None, cmap_name=None)[source]

Utility function to conveniently return the color of an index in a colormap cycle

Parameters
  • n – number of uniformly spaced colors, or array defining the colors’ spacings

  • k – index of the color (if None an array of colors of length n will be returned)

  • cmap_name – name of the colormap

Returns

color of index k from colormap cmap_name made of n colors, or array of colors of length n if k is None Note: if n is an array, then the associated ScalarMappable object is also returned (e.g. for use in a colorbar)
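A minimal sketch of the k/None indexing behavior, assuming plain matplotlib and ignoring the array-valued n and ScalarMappable features:

```python
import matplotlib
matplotlib.use('Agg')  # headless backend so this runs without a display
import numpy as np
from matplotlib import pyplot

def color_cycle_sketch(n=10, k=None, cmap_name='viridis'):
    # Sample n uniformly spaced RGBA colors from the colormap;
    # return the whole array, or a single color if an index k is given
    colors = pyplot.get_cmap(cmap_name)(np.linspace(0, 1, n))
    return colors if k is None else colors[k]
```

For example, color_cycle_sketch(10, 3) gives the fourth of ten evenly spaced viridis colors.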

omfit_classes.utils_plot.cycle_cmap(length=50, cmap=None, start=None, stop=None, ax=None)[source]

Set default color cycle of matplotlib based on colormap

Note that the default color cycle is not changed if ax parameter is set; only the axes’s color cycle will be changed

Parameters
  • length – The number of colors in the cycle

  • cmap – Name of a matplotlib colormap

  • start – Limit colormap to this range (0 <= start < stop <= 1)

  • stop – Limit colormap to this range (0 <= start < stop <= 1)

  • ax – If ax is not None, then change the axes’s color cycle instead of the default color cycle

Returns

color_cycle

omfit_classes.utils_plot.contrasting_color(line_or_color)[source]

Given a matplotlib color specification or a line2D instance or a list with a line2D instance as the first element, pick and return a color that will contrast well. More complicated than just inversion as inverting blue gives yellow, which doesn’t display well on a white background.

Parameters

line_or_color – matplotlib color spec, line2D instance, or list w/ line2D instance as the first element

Returns

4 element array RGBA color specification for a contrasting color
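The underlying idea, choosing a contrasting color from perceived luminance rather than naive inversion, can be sketched as below. This simplified version only returns black or white; the real function is free to pick other contrasting colors:

```python
def contrast_sketch(rgb):
    # Perceived luminance with ITU-R BT.709 weights:
    # dark colors get white, light colors get black
    r, g, b = rgb[:3]
    lum = 0.2126 * r + 0.7152 * g + 0.0722 * b
    return (1.0, 1.0, 1.0, 1.0) if lum < 0.5 else (0.0, 0.0, 0.0, 1.0)
```

Pure blue has low luminance despite inverting to bright yellow, which is why luminance is the better criterion.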

omfit_classes.utils_plot.associated_color(line_or_color)[source]

Given a matplotlib color specification or a line2D instance or a list with a line2D instance as the first element, pick and return a color that will look thematically linked to the first color, but still distinguishable.

Parameters

line_or_color – matplotlib color spec, line2D instance, or list w/ line2D instance as the first element

Returns

4 element array RGBA color specification for a related, similar (but distinguishable) color

omfit_classes.utils_plot.blur_image(im, n, ny=None)[source]

Blurs the image by convolving with a Gaussian kernel of typical size n. The optional keyword argument ny allows for a different size in the y direction.

omfit_classes.utils_plot.pcolor2(*args, fast=False, **kwargs)[source]

Plots 2D data as a patch collection. Differently from matplotlib.pyplot.pcolor the mesh is extended by one element so that the number of tiles equals the number of data points in the Z matrix. The X,Y grid does not have to be rectangular.

Parameters
  • *args – Z or X,Y,Z data to be plotted

  • fast – bool Use pcolorfast instead of pcolor. Speed improvements may be dramatic. However, pcolorfast is marked as experimental and may produce unexpected behavior.

  • **kwargs – these arguments are passed to matplotlib.pyplot.pcolor

Returns

None

omfit_classes.utils_plot.image(*args, **kwargs)[source]

Plots 2D data as an image.

Much faster than pcolor/pcolor2(fast=False), but the data have to be on a rectangular X,Y grid

Parameters
  • *args – Z or X,Y,Z data to be plotted

  • **kwargs – these arguments are passed to pcolorfast

omfit_classes.utils_plot.meshgrid_expand(xdata, ydata)[source]

returns the vertices of the mesh, given xdata and ydata as the centers of the mesh. xdata and ydata are 2D matrices, which could for example be generated by np.meshgrid

Parameters
  • xdata – center of the mesh

  • ydata – center of the mesh

Returns
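The center-to-vertex expansion is easiest to see in 1D: vertices are the midpoints between adjacent centers, extrapolated half a step at each end. This hypothetical 1D helper illustrates the idea (meshgrid_expand applies it along both axes of the 2D mesh):

```python
import numpy as np

def centers_to_vertices_1d(c):
    # Midpoints between adjacent centers...
    mid = 0.5 * (c[:-1] + c[1:])
    # ...plus end vertices mirrored half a step beyond the first/last center
    return np.concatenate([[2 * c[0] - mid[0]], mid, [2 * c[-1] - mid[-1]]])
```

Note the output has one more element than the input, which is exactly what pcolor-style plotting needs to give every data point its own tile.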

omfit_classes.utils_plot.map_HBS_to_RGB(H, B, S=1.0, cmap=None)[source]

Map separate HUE, BRIGHTNESS, and SATURATION arrays to an RGB colormap

Parameters
  • H – HUE data (any shape array)

  • B – BRIGHTNESS data (any shape array)

  • S – SATURATION data (any shape array)

  • cmap – matplotlib.colormap to be used

Returns

RGB array (shape of input array with one more dimension of size 3 (RGB) )
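The HUE/BRIGHTNESS/SATURATION to RGB mapping can be sketched with the standard-library colorsys module (a simplified stand-in; the real function also supports an arbitrary matplotlib colormap for the hue axis):

```python
import colorsys
import numpy as np

def hbs_to_rgb_sketch(H, B, S=1.0):
    # Broadcast all three inputs to a common shape, then convert pointwise
    H, B, S = np.broadcast_arrays(np.asarray(H, float), np.asarray(B, float),
                                  np.asarray(S, float))
    rgb = np.empty(H.shape + (3,))
    for idx in np.ndindex(H.shape):
        # colorsys takes (hue, saturation, value); brightness plays the value role
        rgb[idx] = colorsys.hsv_to_rgb(H[idx], S[idx], B[idx])
    return rgb
```

The output shape matches the documented behavior: input shape with one extra trailing dimension of size 3.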

omfit_classes.utils_plot.RGB_to_HEX(R, G, B)[source]

Convert color from numerical RGB to hexadecimal representation

Parameters
  • R – integer 0<x<255 or float 0.0<x<1.0

  • G – integer 0<x<255 or float 0.0<x<1.0

  • B – integer 0<x<255 or float 0.0<x<1.0

Returns

hexadecimal representation of the color
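A minimal sketch of the int/float handling, assuming floats are always normalized to [0.0, 1.0] (hypothetical helper name):

```python
def rgb_to_hex_sketch(r, g, b):
    def to255(v):
        # floats in [0.0, 1.0] are scaled; ints in [0, 255] pass through
        return int(round(v * 255)) if isinstance(v, float) else int(v)
    return '#%02x%02x%02x' % (to255(r), to255(g), to255(b))
```

So both (255, 0, 0) and (1.0, 0.0, 0.0) yield the same hex string for pure red.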

omfit_classes.utils_plot.plotc(*args, **kwargs)[source]

Plot the various curves defined by the arguments [X],Y,[Z] where X is the x value, Y is the y value, and Z is the color. If one argument is given it is interpreted as Y; if two, then X, Y; if three then X, Y, Z. If all three are given, then it is passed to plotc and the labels are discarded. If Z is omitted, a rainbow of colors is used, with blue for the first curve and red for the last curve. A different color map can be given with the cmap keyword (see http://wiki.scipy.org/Cookbook/Matplotlib/Show_colormaps for other options). If X is omitted, then the (ith) index of Y is used as the x value.

Parameters
  • *args

  • **kwargs

Returns

omfit_classes.utils_plot.title_inside(string, x=0.5, y=0.9, ax=None, **kwargs)[source]

Write the title of a figure inside the figure axis rather than outside

Parameters
  • string – title string

  • x – x location of the title string (default 0.5, that is centered)

  • y – y location of the title string (default 0.9)

  • ax – axes to operate on

  • **kwargs – additional keywords passed to pyplot.title

Returns

pyplot.title text object

omfit_classes.utils_plot.increase_resolution(*args, **kwargs)[source]

This function takes 1 (Z) or 3 (X,Y,Z) 2D tables and interpolates them to higher resolution by bivariate spline interpolation. If 1 (3) table(s) is (are) provided, then the second (fourth) argument is the resolution increase, which can be a positive or negative integer (res = res0 * 2^n) or a float which sets the grid size in the units provided by the X and Y tables

class omfit_classes.utils_plot.infoScatter(x, y, annotes, axis=None, tol=5, func=None, all_on=False, suppress_canvas_draw=False, **kw)[source]

Bases: object

improved version of: http://wiki.scipy.org/Cookbook/Matplotlib/Interactive_Plotting

Callback for matplotlib to display an annotation when points are clicked on

Parameters
  • x – x of the annotations

  • y – y of the annotations

  • annotes – list of string annotations

  • axis – axis on which to operate on (default to current axis)

  • tol – vicinity in pixels where to look for annotations

  • func – function to call with signature: func(x,y,annote,visible,axis)

  • all_on – Make all of the text visible to begin with

  • suppress_canvas_draw – Do not actively draw the canvas if all_on is True; makes plotting faster when there are many subplots

  • **kw – extra keywords passed to matplotlib text class

drawAnnote(axis, x, y, annote, redraw_canvas=True)[source]

Draw the annotation on the plot

drawSpecificAnnote(annote)[source]
omfit_classes.utils_plot.infoPoint(fig=None)[source]

print x,y coordinates where the user clicks

Parameters

fig – matplotlib figure

omfit_classes.utils_plot.XKCDify(ax, mag=1.0, f1=50, f2=0.01, f3=15, bgcolor='w', xaxis_loc=None, yaxis_loc=None, xaxis_arrow='+', yaxis_arrow='+', ax_extend=0.1, expand_axes=False, ylabel_rot=78)[source]

XKCD plot generator by Jake Vanderplas; modified by Sterling Smith

This is a script that will take any matplotlib line diagram, and convert it to an XKCD-style plot. It will work for plots with line & text elements, including axes labels and titles (but not axes tick labels).

The idea for this comes from work by Damon McDougall

http://www.mail-archive.com/matplotlib-users@lists.sourceforge.net/msg25499.html

This adjusts all lines, text, legends, and axes in the figure to look like xkcd plots. Other plot elements are not modified.

Parameters
  • ax – Axes instance the axes to be modified.

  • mag – float the magnitude of the distortion

  • f1, f2, f3 – int, float, int filtering parameters. f1 gives the size of the window, f2 gives the high-frequency cutoff, f3 gives the size of the filter

  • xaxis_loc, yaxis_loc – float The locations to draw the x and y axes. If not specified, they will be drawn from the bottom left of the plot

  • xaxis_arrow – str where to draw arrows on the x axes. Options are ‘+’, ‘-‘, ‘+-‘, or ‘’

  • yaxis_arrow – str where to draw arrows on the y axes. Options are ‘+’, ‘-‘, ‘+-‘, or ‘’

  • ax_extend – float How far (fractionally) to extend the drawn axes beyond the original axes limits

  • expand_axes – bool if True, then expand axes to fill the figure (useful if there is only a single axes in the figure)

  • ylabel_rot – float number of degrees to rotate the y axis label

omfit_classes.utils_plot.autoscale_y(ax, margin=0.1)[source]

Rescales the y-axis based on the data that is visible given the current xlim of the axis. Created by eldond at 2017 Mar 23 20:26

This function was taken from an answer by DanHickstein on stackoverflow.com http://stackoverflow.com/questions/29461608/matplotlib-fixing-x-axis-scale-and-autoscale-y-axis http://stackoverflow.com/users/1078391/danhickstein

I don’t think this function considers shaded bands such as would be used to display error bars. Increasing the margin may be a good idea when dealing with such plots.

Parameters
  • ax – a matplotlib axes object

  • margin – The fraction of the total height of the y-data to pad the upper and lower ylims
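The core computation, finding new y-limits from only the data visible inside the current xlim, can be sketched as below (hypothetical helper; the real function reads the lines from the axes object and then applies ax.set_ylim):

```python
import numpy as np

def visible_ylim(x, y, xlim, margin=0.1):
    # Consider only the y-data whose x falls inside the current x-limits
    mask = (x >= xlim[0]) & (x <= xlim[1])
    lo, hi = y[mask].min(), y[mask].max()
    # Pad both ends by a fraction of the visible data height
    pad = margin * (hi - lo)
    return lo - pad, hi + pad
```

As noted above, shaded error bands are not considered, so a larger margin may be appropriate for such plots.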

omfit_classes.utils_plot.set_linearray(lines, values=None, cmap='viridis', vmin=None, vmax=None)[source]

Set colors of lines to colormapping of values.

Other good sequential colormaps are YlOrBr and autumn. A good diverging colormap is bwr.

Parameters
  • lines (list) – Lines to set colors.

  • values (array like) – Values corresponding to each line. Default is indexing.

  • cmap (str) – Valid matplotlib colormap name.

  • vmax (float) – Upper bound of colormapping.

  • vmin (float) – Lower bound of colormapping.

Returns

ScalarMappable. A mapping object used for colorbars.

omfit_classes.utils_plot.pi_multiple(x, pos=None)[source]

Provides a string representation of x that is a multiple of the fraction pi/’denominator’.

See multiple_formatter documentation for more info.

omfit_classes.utils_plot.multiple_formatter(denominator=24, number=3.141592653589793, latex='\\pi')[source]

Returns a tick formatting function that creates tick labels in multiples of ‘number’/’denominator’.

Code from https://stackoverflow.com/a/53586826/6605826

Parameters
  • denominator – The denominator of the fraction that tick labels are created in multiples of.

  • number – The numerator of the fraction that tick labels are created in multiples of

  • latex – The latex string used to represent ‘number’
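A sketch of such a formatter, following the same reduce-to-lowest-terms approach as the linked answer (function name is illustrative):

```python
from fractions import Fraction
import math

def multiple_formatter_sketch(denominator=24, number=math.pi, latex=r'\pi'):
    # Returns a func(x, pos) suitable for matplotlib.ticker.FuncFormatter
    def fmt(x, pos=None):
        # Snap x to the nearest multiple of number/denominator, then reduce
        frac = Fraction(int(round(x / (number / denominator))), denominator)
        n, d = frac.numerator, frac.denominator
        if n == 0:
            return '$0$'
        sign, n = ('-' if n < 0 else ''), abs(n)
        num = latex if n == 1 else '%s%s' % (n, latex)
        if d == 1:
            return '$%s%s$' % (sign, num)
        return r'$%s\frac{%s}{%s}$' % (sign, num, d)
    return fmt
```

Attaching it with pyplot.gca().xaxis.set_major_formatter(FuncFormatter(multiple_formatter_sketch())) yields tick labels like $\pi$ and $\frac{\pi}{2}$.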

omfit_classes.utils_plot.convert_ticks_to_pi_multiple(axis=None, major=2, minor=4)[source]

Given an axis object, force its ticks to be at multiples of pi, with the labels formatted nicely […,-2pi,-pi,0,pi,2pi,…]

Parameters
  • axis – An axis object, such as pyplot.gca().xaxis

  • major – int Denominator of pi for major tick marks. 2: major ticks at 0, pi/2., pi, … Can’t be greater than 24.

  • minor – int Denominator of pi for minor tick marks. 4: minor ticks at 0, pi/4., pi/2., …

Returns

None

omfit_classes.utils_plot.is_colorbar(ax)[source]

Guesses whether a set of Axes is home to a colorbar

https://stackoverflow.com/a/53568035/6605826

Parameters

ax – Axes instance

Returns

bool True if the x xor y axis satisfies all of the following and thus looks like it’s probably a colorbar: No ticks, no tick labels, no axis label, and range is (0, 1)

omfit_classes.utils_plot.tag_plots_abc(fig=None, axes=None, corner=[1, 1], font_size='medium', skip_suspected_colorbars=True, start_at=0, **annotate_kw)[source]

Tag plots with (a), (b), (c), …

Parameters
  • fig – Specify a figure instance instead of letting the function pick the most recent one

  • axes – Specify a plot axes instance or list/array of plot axes instances instead of letting the function use fig.get_axes()

  • corner – Which corner does the tag go in? [0, 0] for bottom left, [1, 0] for bottom right, etc.

  • font_size – Font size of the annotation.

  • skip_suspected_colorbars – bool Try to detect axes which are home to colorbars and skip tagging them. An Axes instance is suspected of having a colorbar if either the xaxis or yaxis satisfies all of these conditions: - Length of tick list is 0 - Length of tick label list is 0 - Length of axis label is 0 - Axis range is (0,1)

  • start_at – int Offset value for skipping some numbers. Useful if you aren’t doing real subfigs, but two separate plots and placing them next to each other in a publication. Set to 1 to start at (b) instead of (a), for example.

  • annotate_kw – dict Additional keywords passed to annotate(). Keywords used by settings such as corner, etc. will be overridden.

omfit_classes.utils_plot.mark_as_interactive(ax, interactive=True)[source]

Mark an axis as interactive or not

Parameters
  • ax – axis

  • interactive – boolean

Returns

axis

class omfit_classes.utils_plot.View1d(data, coords=None, dims=None, name=None, dim=None, axes=None, dynamic_ylim=False, use_uband=False, cornernote_options=None, plot_options=None, **indexers)[source]

Bases: object

Plot 2D or 3D data as line-plots with interactive navigation through the alternate dimensions. Navigation uses the 4 arrow keys to traverse up to 2 alternate dimensions.

The data must be on a regular grid, and is formed into an xarray DataArray if not already one.

Uses matplotlib line plot for float/int data, OMFIT uerrorbar for uncertainty variables.

Examples:

The view1d can be used to interactively explore data. For usual arrays it draws line slices.

>> t = np.arange(20)
>> s = np.linspace(0, 2*np.pi, 60)
>> y = np.sin(np.atleast_2d(s).T + np.atleast_2d(t))
>> da = xarray.DataArray(y, coords=SortedDict([('space', s), ('time', t)]), name='sine')
>> v = View1d(da.transpose('time', 'space'), dim='space', time=10)

For uncertainties arrays, it draws errorbars using the uerrorbar function. Multiple views with the same dimensions can be linked for increased speed (eliminate redundant calls to redraw).

>> y_u = unumpy.uarray(y + (random(y.shape) - 0.5), random(y.shape))
>> da_u = xarray.DataArray(y_u, coords=SortedDict([('space', s), ('time', t)]), name='measured')
>> v_u = View1d(da_u, dim='space', time=10, axes=pyplot.gca())
>> v.link(v_u)  # v will remain connected to keypress events and drive v_u

Variable dependent axis data can be viewed if x and y share a regular grid in some coordinates,

>> x = np.array([s + (random(s.shape) - 0.5)*0.2 for i in t]).T
>> da_x = xarray.DataArray(x, coords=SortedDict([('space', s), ('time', t)]), name='varspace')
>> ds = da_u.to_dataset().merge(da_x.to_dataset())
>> v_x = View1d(ds, name='measured', dim='varspace', time=10, axes=pyplot.gca())
>> v.link(v_x)

Parameters
  • data – DataArray or array-like 2D or 3D data values to be viewed.

  • coords – dict-like Dictionary of Coordinate objects that label values along each dimension.

  • dims – tuple Dimension names associated with this array.

  • name – string Label used in legend. Empty or beginning with ‘_’ produces no legend label. If the data is a DataArray it will be renamed before plotting. If the data is a Dataset, the name specifies which of its existing data_vars to plot.

  • dim – string, DataArray Dimension plotted on x-axis. If DataArray, must have same dims as data.

  • axes – Axes instance The axes plotting is done in.

  • dynamic_ylim – bool Re-scale y limits of axes when new slices are plotted.

  • use_uband – bool Use uband instead of uerrorbar to plot uncertainties variables.

  • cornernote_options – dict Key word arguments passed to cornernote (such as root, shot, device). If this is present, then cornernote will be updated with the new time if there is only one time to show, or the time will be erased from the cornernote if more than one time is shown by this View1d instance (such as by freezing one slice).

  • plot_options – dict Key word arguments passed to plot/uerrorbar/uband.

  • **indexers – dict Dictionary with keys given by dimension names and values given by arrays of coordinate index values. Must include all dimensions other than the fundamental one.

use_uband(use=True)[source]

Toggle use of uband instead of uerrorbar for plotting function

set_label_indexers_visible(visible=True)[source]

Include the current indexers in line labels.

key_command(event, draw=True, **plot_options)[source]

Use arrows to navigate up to 2 extra dimensions by incrementing the slice indexes.

Use w/e to write/erase slices so that they persist during further navigation.

isel(draw=True, **indexers)[source]

Re-slice the data along its extra dimensions using indexes.

link(view, disconnect=False)[source]

Link all actions in this view to the given view.

Parameters
  • view – View1d. A view that will be driven by this View’s key press responses.

  • disconnect – bool. Disconnect key press events from driven view.

unlink(view)[source]

Unlink actions in this view from controlling the given view.

class omfit_classes.utils_plot.View2d(data, coords=None, dims=None, name=None, axes=None, quiet=False, use_uband=False, contour_levels=0, imag_options={}, plot_options={'ls': '-', 'marker': ''}, **indexers)[source]

Bases: object

Plot 2D data with interactive slice viewers attached to the 2D Axes. Left clicking on the 2D plot refreshes the line plot slices, right clicking overplots new slices.

The original design of this viewer was for data on a rectangular grid, for which x and y are 1D arrays defining the axes but may be irregularly spaced. In this case, the line plot points correspond to the given data. If x or y is a 2D array, the data is assumed irregular and interpolated to a regular grid using scipy.interpolate.griddata.

Example:

Explore a basic 2D np array without labels,

>> x = np.linspace(-1, 1, 200)
>> y = np.linspace(-2, 2, 200)
>> xx, yy = meshgrid(x, y)
>> z = np.exp(-xx**2 - yy**2)
>> v = View2d(z)

To add more meaningful labels to the axes and data do,

>> v = View2d(z, coords={'x': x, 'y': y}, dims=('x', 'y'), name='wow')

or use a DataArray,

>> d = DataArray(z, coords={'x': x, 'y': y}, dims=('x', 'y'), name='wow')
>> v = View2d(d)

Note that the coordinates should be 1D. Initializing a view with regular-grid 2D coordinates will result in an attempt to slice them appropriately. This is done for consistency with some matplotlib 2D plotting routines, but is not recommended.

>> v = View2d(z, coords=dict(x=x, y=y), dims=('x', 'y'))

If you have irregularly distributed 2D data, it is recommended that you first interpolate it to a 2D grid in whatever way is most applicable. If you do not, initializing a view will result in an attempt to linearly interpolate to an automatically chosen grid.

>> x = np.random.rand(1000)
>> y = np.random.rand(1000) * 2
>> z = np.exp(-x**2 - y**2)
>> v = View2d(z, coords=dict(x=x, y=y), dims=('x', 'y'))

The same applies for 2D collections of irregular points and values.

>> x = x.reshape((50, 20))
>> y = y.reshape((50, 20))
>> z = z.reshape((50, 20))
>> v = View2d(z, coords=[('x', x), ('y', y)], dims=('x', 'y'))

Parameters
  • data – DataArray or array-like 2D or 3D data values to be viewed.

  • coords – dict-like Dictionary of Coordinate objects that label values along each dimension.

  • dims – tuple Dimension names associated with this array.

  • name – string Label used in legend. Empty or beginning with '_' produces no legend label.

  • dim – string, DataArray Dimension plotted on x-axis. If DataArray, must have same dims as data.

  • axes – Axes instance The axes plotting is done in.

  • quiet – bool Suppress printed messages.

  • use_uband – bool Use uband for 1D slice plots instead of uerrorbar.

  • contour_levels – int or np.ndarray Number of or specific levels used to draw black contour lines over the 2D image.

  • imag_options – dict Key word arguments passed to the DataArray plot method (modified pcolormesh).

  • plot_options – dict Key word arguments passed to plot or uerrorbar. Color will be determined by cmap variable.

  • **indexers – dict Dictionary with keys given by dimension names and values given by arrays of coordinate index values.

use_uband(use=True)[source]

Toggle use of uband instead of uerrorbar in 1D slice plots

key_navigate_cuts(key_press)[source]
set_data(data=None, **kw)[source]

Set base data and axes, as well as displayed values. Call with no arguments to reset to the interactively displayed values (use after der() or int()).

Parameters

data (DataArray) – 2D array of data.

Returns

The main 2d matplotlib.collections.QuadMesh from pcolormesh

der(axis=0)[source]

Set 2D values to derivative of data along specified axis.

Parameters

axis (int) – Axis along which derivative is taken (0=horizontal, 1=vertical).

Returns

The main 2d matplotlib.collections.QuadMesh from pcolormesh

int(axis=0)[source]

Set 2D values to integral of data along specified axis.

Parameters

axis (int) – Axis along which integration is taken (0=horizontal, 1=vertical).

Returns

The main 2d matplotlib.collections.QuadMesh from pcolormesh
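As an illustration of what der() and int() compute, here is a minimal numpy sketch (not the class implementation) of an axis-wise derivative and cumulative trapezoidal integral:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 5)
z = np.tile(x**2, (3, 1))        # 3 identical rows of z = x**2

# derivative along the horizontal (x) axis
dz = np.gradient(z, x, axis=1)

# cumulative trapezoidal integral along the same axis, starting at 0
steps = 0.5 * (z[:, 1:] + z[:, :-1]) * np.diff(x)
iz = np.concatenate([np.zeros((z.shape[0], 1)), np.cumsum(steps, axis=1)], axis=1)
```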

toggle_log()[source]

Toggle log/linear scaling of data in 2D plot.

vslice(xmin, xmax, std=False, draw=True, force=False, **kw)[source]

Plot line collection of x slices.

Parameters
  • xmin (float) – Lower bound of slices displayed in line plot.

  • xmax (float) – Upper bound of slices displayed in line plot.

  • std (bool) – Display mean and standard deviation instead of individual slices.

  • draw (bool) – Redraw the figure canvas

  • force (bool) – Re-slice even if arguments are identical to the last slice.

  • **kw – Extra keywords are passed to the 1D plot() function

Returns

Possibly modified (xmin,xmax,std)

Return type

tuple
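The slicing itself amounts to selecting the columns in the requested range and, with std=True, reducing them. A hedged numpy sketch of that reduction (the viewer adds plotting and caching on top):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 11)
z = np.outer(np.arange(1, 5), x)   # 4 rows with different slopes

mask = (x >= 0.19) & (x <= 0.41)   # columns covered by the selection rectangle
sel = z[:, mask]
mean, std = sel.mean(axis=1), sel.std(axis=1)
```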

hslice(ymin, ymax, std=False, draw=True, force=False, **kw)[source]

Plot line collection of y slices.

Parameters
  • ymin (float) – Lower bound of slices displayed in line plot.

  • ymax (float) – Upper bound of slices displayed in line plot.

  • std (bool) – Display mean and standard deviation instead of individual slices.

  • draw (bool) – Redraw the figure canvas

  • force (bool) – Re-slice even if arguments are identical to the last slice.

  • **kw – Extra keywords are passed to the 1D plot() function

Returns

Possibly modified (ymin,ymax,std)

Return type

tuple

line_select_callback(eclick, erelease=None)[source]

Call vslice and hslice for the range of x and y spanned by the rectangle between mouse press and release. Called by RectangleSelector.

Parameters
  • eclick (matplotlib Event) – Matplotlib mouse click.

  • erelease (matplotlib Event) – Matplotlib mouse release.

Returns

None

toggle_selector(event)[source]

Connected to key press events to turn on (a) or off (q) selector.

Parameters

event (matplotlib event) – key press event.

Returns

None

Link all actions in this view to the given view.

Unlink actions in this view from controlling the given view.

class omfit_classes.utils_plot.View3d(data, coords=None, dims=None, name=None, axes=None, use_uband=False, quiet=False, contour_levels=0, imag_options={}, plot_options={'ls': '-', 'marker': ''}, **indexers)[source]

Bases: omfit_classes.utils_plot.View2d

View 3D data by scrolling through 3rd dimension in a View2d plot.

Parameters
  • data – DataArray or array-like 2D or 3D data values to be viewed.

  • coords – dict-like Dictionary of Coordinate objects that label values along each dimension.

  • dims – tuple Dimension names associated with this array.

  • name – string Label used in legend. Empty or beginning with '_' produces no legend label.

  • dim – string, DataArray Dimension plotted on x-axis. If DataArray, must have same dims as data.

  • axes – Axes instance The axes plotting is done in.

  • quiet – bool Suppress printed messages.

  • use_uband – bool Use uband for 1D slice plots instead of uerrorbar.

  • contour_levels – int or np.ndarray Number of or specific levels used to draw black contour lines over the 2D image.

  • imag_options – dict Key word arguments passed to the DataArray plot method (modified pcolormesh).

  • plot_options – dict Key word arguments passed to plot or uerrorbar/uband. Color will be determined by cmap variable.

  • **indexers – dict Dictionary with keys given by dimension names and values given by arrays of coordinate index values.

set_3d(x2, index=False, draw=True)[source]

Set third dimension of view to value nearest to slice.

Parameters
  • x2 (float or int) – Slice in third dimension. Type depends on value of index.

  • index (bool) – Set True if x2 is the integer index

  • draw (bool) – If True, redraw the canvas

Returns

None

class omfit_classes.utils_plot.DragPoints(yArray, xArray=None, eyArray=None, exArray=None, editY=True, editX=True, exDelta=1, eyDelta=1, sorted=False, continuous=True, showOriginal=False, func=None, fargs=[], cyArray=None, cxArray=None, onMove=None, resize=False, show_hints=True, ax=None, **kwargs)[source]

Bases: object

This class is used to define matplotlib draggable arrays

Parameters
  • yArray – location in the OMFIT tree for the y array

  • xArray – location in the OMFIT tree for the x array

  • eyArray – location in the OMFIT tree for the y error array

  • exArray – location in the OMFIT tree for the x error array

  • editY – allow dragging of points along Y axis

  • editX – allow dragging of points along X axis

  • exDelta – increments of exArray

  • eyDelta – increments of eyArray

  • sorted – keep points sorted in x

  • continuous – draw continuously even while user is dragging (may be good to disable if takes long time)

  • showOriginal – show original points

  • func – a function with signature like func(x, y, motion, fargs=[]) where:

    • x : x coordinate of control points

    • y : y coordinate of control points

    • motion : True if the user has dragged a point

    This function must return x_, y_, x, y where:

    • x_ : interpolating points between x coordinates of control points

    • y_ : interpolating points between y coordinates of control points

    • x : x coordinate of control points

    • y : y coordinate of control points

  • fargs – arguments to the function

  • cyArray – location in the OMFIT tree for the y interpolation array

  • cxArray – location in the OMFIT tree for the x interpolation array

  • onMove – function handle to call whenever control point is moved

  • resize – boolean controlling whether to update the axes extent when a dragged point reaches an edge of the figure

  • show_hints – bool. Show a cornernote with tips for interaction.

  • ax – Axes. Axes in which to plot.

All other keyword arguments are passed to the matplotlib plot function.
epsilon = 5
draw_callback(event=None)[source]
get_ind_under_point(event)[source]
button_press_callback(event)[source]
key_press_callback(event=None)[source]
button_release_callback(event)[source]
motion_notify_callback(event)[source]
omfit_classes.utils_plot.editProfile(yi, xi=None, n=None, showOriginal=True, func='spline', onMove=None)[source]

This function opens an interactive figure for convenient editing of profiles via spline

Parameters
  • yi – string whose eval yields the y data

  • xi – string whose eval yields the x data

  • n – number of control points

  • showOriginal – plot original data

  • func – interpolation function used to interpolate between control points ‘linear’, ‘spline’, ‘pchip’, ‘circular’

  • onMove – function to call when moving of control points occurs

  • resize – boolean controlling whether to update the axes extent when a dragged point reaches an edge of the figure

Returns

DragPoints object
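Conceptually, each drag re-interpolates the display curve through the current control points. A minimal stand-in using linear interpolation (the real func options include 'linear', 'spline', 'pchip', 'circular'):

```python
import numpy as np

def redraw(xc, yc, n=50):
    # Interpolating points (x_, y_) through control points (xc, yc),
    # the quantities a DragPoints-style func must return alongside xc, yc
    x_ = np.linspace(xc.min(), xc.max(), n)
    return x_, np.interp(x_, xc, yc)

x_, y_ = redraw(np.array([0.0, 0.5, 1.0]), np.array([0.0, 1.0, 0.0]))
```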

omfit_classes.utils_plot.cornernote(text='', root=None, device=None, shot=None, time=None, ax=None, fontsize='small', clean=True, remove=False, remove_specific=False)[source]

Write text at the bottom right corner of a figure

Parameters
  • text – text to appear in the bottom right corner

  • root

    • if ‘’ append nothing

    • if None append shot/time as from OMFIT[‘MainSettings’][‘EXPERIMENT’]

    • if OMFITmodule append shot/time as from root[‘SETTINGS’][‘EXPERIMENT’]

  • device – override device string (does not print device at all if empty string)

  • shot – override shot string (does not print shot at all if empty string)

  • time – override time string (does not print time at all if empty string)

  • ax – axis to plot on

  • fontsize – str or float. Sets font size of the Axes annotate method.

  • clean – delete existing cornernote(s) from current axes before drawing a new cornernote

  • remove – delete existing cornernote(s) and return before drawing any new ones

  • remove_specific – delete existing cornernote(s) from current axes only if text matches the text that would be printed by the current call to cornernote() (such as identical shot, time, etc.)

Returns

Matplotlib annotate object
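The device/shot/time stamp is essentially a joined string of whichever parts are non-empty. An illustrative sketch (the real cornernote formatting and override logic is richer; the helper name here is hypothetical):

```python
def assemble_note(text='', device=None, shot=None, time=None):
    # Join non-empty parts; empty strings suppress a field, mirroring the
    # "does not print ... at all if empty string" behavior described above
    parts = [str(p) for p in (text, device, shot, time) if p not in (None, '')]
    return ' '.join(parts)

print(assemble_note('scan A', device='DIII-D', shot=123456))  # scan A DIII-D 123456
```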

class omfit_classes.utils_plot.axdline(slope=1, intercept=0, *args, **kwargs)[source]

Bases: matplotlib.lines.Line2D

Draw a line based on its slope and y-intercept. Additional arguments are passed to the matplotlib.lines.Line2D constructor.

From a Stack Overflow answer by ali_m: http://stackoverflow.com/a/14348481/6605826 Originally named ABLine2D.

Create a .Line2D instance with x and y data in sequences of xdata, ydata.

Additional keyword arguments are .Line2D properties:

Properties:

agg_filter: a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array alpha: float or None animated: bool antialiased or aa: bool clip_box: .Bbox clip_on: bool clip_path: Patch or (Path, Transform) or None color or c: color contains: unknown dash_capstyle: {‘butt’, ‘round’, ‘projecting’} dash_joinstyle: {‘miter’, ‘round’, ‘bevel’} dashes: sequence of floats (on/off ink in points) or (None, None) data: (2, N) array or two 1D arrays drawstyle or ds: {‘default’, ‘steps’, ‘steps-pre’, ‘steps-mid’, ‘steps-post’}, default: ‘default’ figure: .Figure fillstyle: {‘full’, ‘left’, ‘right’, ‘bottom’, ‘top’, ‘none’} gid: str in_layout: bool label: object linestyle or ls: {‘-‘, ‘–’, ‘-.’, ‘:’, ‘’, (offset, on-off-seq), …} linewidth or lw: float marker: marker style string, ~.path.Path or ~.markers.MarkerStyle markeredgecolor or mec: color markeredgewidth or mew: float markerfacecolor or mfc: color markerfacecoloralt or mfcalt: color markersize or ms: float markevery: None or int or (int, int) or slice or List[int] or float or (float, float) or List[bool] path_effects: .AbstractPathEffect picker: unknown pickradius: float rasterized: bool or None sketch_params: (scale: float, length: float, randomness: float) snap: bool or None solid_capstyle: {‘butt’, ‘round’, ‘projecting’} solid_joinstyle: {‘miter’, ‘round’, ‘bevel’} transform: matplotlib.transforms.Transform url: str visible: bool xdata: 1D array ydata: 1D array zorder: float

See set_linestyle() for a description of the line styles, set_marker() for a description of the markers, and set_drawstyle() for a description of the draw styles.
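The geometry behind such a line reduces to evaluating y = slope*x + intercept at the current x-limits (recent matplotlib also offers Axes.axline for the same purpose). A minimal sketch of that computation:

```python
def line_endpoints(slope, intercept, xlim):
    # Endpoints of y = slope*x + intercept across the x-limits; this is the
    # update an axdline-style artist redoes whenever the limits change
    x0, x1 = xlim
    return (x0, slope * x0 + intercept), (x1, slope * x1 + intercept)

p0, p1 = line_endpoints(2.0, 1.0, (0.0, 5.0))  # ((0.0, 1.0), (5.0, 11.0))
```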

omfit_classes.utils_plot.square_subplots(nplots, ncol_max=inf, flip=False, sparse_column=True, just_numbers=False, identify=False, fig=None, **kw)[source]

Creates a set of subplots in an approximate square, with a few empty subplots if needed

Parameters
  • nplots – int Number of subplots desired

  • ncol_max – int Maximum number of columns to allow

  • flip – bool True: Puts row 0 at the bottom, so every plot on the bottom row can accept an X axis label False: Normal plot numbering with row 0 at the top. The bottom row may be sparsely populated.

  • sparse_column – bool Controls the arrangement of empty subplots.

    True: the last column is sparse. That is, all the empty plots will be in the last column. There will be at most one plot missing from the last row, and potentially several from the last column. The advantage is this provides plenty of X axes on the bottom row to accept labels. To get natural numbering of flattened subplots, transpose before flattening: axs.T.flatten(), or just use the 1D axsf array that's returned.

    False: the last row is sparse. All the empty plots will be in the last row. The last column will be missing at most one plot, but the last row may be missing several. This arrangement goes more smoothly with the numbering of axes after flattening.

  • just_numbers – bool Don’t create any axes, but instead just return the number of rows, columns, and empty subplots in the array.

  • identify – bool For debugging: write the number (as flattened) and [row, col] coordinates of each subplot on the plot itself. These go in the center, in black. In the top left corner in red is the naive flattened count, which will appear on empty plots as well to show how wrong it is. In the bottom right corner in blue is the proper flattened count based on axsf.

  • fig – Figure instance [optional]

  • **kw – keywords passed to pyplot.subplots when creating axes (like sharex, etc.)

Returns

(axs, axsf) or (nr, nc, on, empty)

axs: 2d array of Axes instances. It is flipped vertically relative to normal axes output by pyplot.subplots, so the 0th row is the bottom. This is so the bottom row will be populated and can receive x axis labels.

axsf: 1d array of Axes instances, leaving out the empty ones (they might not be in order nicely)

empty: int: number of empty cells in axs. The first empty, if there is one, is [-1, -1] (top right), then [-1, -2] (top row, 2nd from the right), etc.

nr: int: number of rows

nc: int: number of columns

on: 2d bool array: flags indicating which axes should be on (True) and which should be hidden/off (False)
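The row/column arithmetic behind the approximate square can be sketched in a few lines. This is an illustration of the just_numbers=True bookkeeping under simple assumptions; the actual function may choose differently:

```python
import math

def square_layout(nplots, ncol_max=float('inf')):
    # Rows, columns and leftover (empty) cells for an approximately square grid
    nc = int(min(math.ceil(math.sqrt(nplots)), ncol_max))
    nr = math.ceil(nplots / nc)
    return nr, nc, nr * nc - nplots

print(square_layout(7))              # (3, 3, 2)
print(square_layout(7, ncol_max=2))  # (4, 2, 1)
```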

utils_fit

omfit_classes.utils_fit.autoknot(x, y, x0, evaluate=False, minDist=None, minKnotSpacing=0, s=3, w=None, allKnots=False, userFunc=None, *args, **kwargs)[source]

This function returns the optimal location of the inner-knots for a nth degree spline interpolation of y=f(x)

Parameters
  • x – input x array

  • y – input y array

  • x0 – initial knots distribution (list) or number of knots (integer)

  • s – order of the spline

  • w – input weights array

  • allKnots – returns all knots or only the central ones, excluding the extremes

  • userFunc – run autoknot with a user-defined function with signature y0 = userFunc(x, y)(x0)

  • minDist – a number between >0 and infinity (though usually <1) which sets the minimum distance between knots. If small, knots will be allowed to be close to one another; if large, knots will be equispaced. Use None to automatically determine this parameter based on 0.01*len(knots). If minDist is a string then it will be evaluated (the knot locations in the string can be accessed as knots).

  • minKnotSpacing – a number in x input units that sets the minimal inter-knot spacing that autoknot should aim for. It appears as an additional term in the cost function, and violations are heavily penalized. If set too large it will force autoknot to output evenly spaced knots. Defaults to 0 (i.e. no limit).

Returns

x0 optimal location of inner knots to spline interpolate y=f(x) f1=interpolate.LSQUnivariateSpline(x,y,x0,k=s,w=w)

class omfit_classes.utils_fit.knotted_fit_base(x, y, yerr)[source]

Bases: object

The base class for the types of fits that have free knots and locations

Does basic checking for x,y,yerr then stores them in self.x, self.y, self.yerr such that x is monotonically increasing

kw_vars = ['min_slope', 'monotonic', 'min_dist', 'first_knot', 'knots', 'fixed_knots', 'fit_SOL', 'outliers']
kw_defaults = [None, False, 0, None, 3, False, False, 3]
fit_knot_range(knots_range, **kw)[source]

Try a range of numbers of knots. Stop the loop if the current number of knots did no better than the previous best.

Parameters
  • knots_range – A tuple, (min_num_knots, max_num_knots) passed to range

  • **kw – The keywords passed to fit_single

get_tag(**kw)[source]

Get the tag for the settings given by **kw

Parameters

**kw – The Fit Keywords documented in the __init__ method

fit_single(**keyw)[source]

Perform a single fit for the given **keyw

Parameters

**keyw – The Fit Keywords documented in the __init__ method

Returns

The lmfit.MinimizerResult instance for the fit

The fit is also stored in self.fits[self.get_tag(**keyw)]

restore_orig_data()[source]

Restore the original data

get_zk(params)[source]

Return the array of knot values from the lmfit.Parameters object

get_xk(params)[source]

Return the array of knot locations from the lmfit.Parameters object

valid_xk(params)[source]
Parameters

params – An lmfit.Paramaters object to check

Returns

True if np.all(min_dist < xk_i - xk_(i-1))
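This validity criterion reduces to a vectorized spacing check. A short numpy sketch of the same test:

```python
import numpy as np

def knots_valid(xk, min_dist=0.0):
    # True when consecutive knots are strictly increasing by more than min_dist
    return bool(np.all(np.diff(xk) > min_dist))

print(knots_valid([0.0, 0.3, 0.7, 1.0], min_dist=0.1))  # True
print(knots_valid([0.0, 0.7, 0.3, 1.0]))                # False
```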

residual(params)[source]

Return the array of np.sqrt(((ymodel-y)/yerr)**2) given the lmfit.Parameters instance

Developer's note: for the default data set tested, choosing the sqrt of the square did better than the signed residual

get_param_unc(lmfit_out)[source]

Get parameters with correlated uncertainties

Parameters

lmfit_out – lmfit.MinimizerResult instance

Returns

tuple of xk, zk, ybc, xbc

plot(**kw)[source]

Plot all fits calculated so far, each in its own tab of a FigureNotebook, where the tab is labeled by the shortened tag of the tag of the fit

Parameters

**kw – Dictionary passed to self.plot_individual_fit

Returns

The FigureNotebook instance created

short_tag(tag)[source]

Return a shortened version of the tag

plot_individual_fit(tag, ax=None, x=array([0.0, 0.0011, 0.0022, ..., 1.0978, 1.0989, 1.1]))[source]

Plot a single fit, characterized by tag

Parameters
  • tag – The tag of the fit that is to be plotted, must be in self.fits.keys()

  • ax – The axes to plot into (one is created if None)

  • x – The x values to use for plotting the fitted curve

has_large_xerrorbars(lmfit_out)[source]
Parameters

lmfit_out – A lmfit.MinimizerResult object

Returns

True if the errorbars of the knot locations are larger than the distance between knots

has_large_errorbars(lmfit_out, verbose=False)[source]
Parameters

lmfit_out – A lmfit.MinimizerResult object

Returns

True if any of the following are False:

  1. the errorbars of the knot locations are smaller than the distance between the knots

  2. the errorbars in the fit at the data locations is not larger than the range in data

  3. the errorbars in the fit at the data locations is not larger than the fit value at that location, if the data are all of one sign

get_best_fit(verbose=False, allow_no_errorbar=None)[source]

Figure out which is the best fit so far

The best fit is characterized as being the fit with the lowest reduced chi^2 that is valid. The definition of valid is

  1. the knots are in order

  2. the knots are at least min_dist apart

  3. the errorbars on the fit parameters were able to be determined

  4. the errorbars of the knot locations are smaller than the distance between the knots

  5. the errorbars in the fit at the data locations is not larger than the range in data

  6. the errorbars in the fit at the data locations is not larger than the fit value at that location

Parameters
  • verbose – If True, print the tag and reduced chi2 of all fits

  • allow_no_errorbar – If True, if there is no valid fit found with errorbars, return the best fit without errorbars

Returns

A tuple of (best_tag, best_reduced_chi2)
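Underneath, selecting the best fit is a filtered argmin over reduced chi^2. A hedged sketch with a plain dict standing in for the class's richer fits bookkeeping:

```python
def best_fit(fits):
    # fits: {tag: (is_valid, reduced_chi2)}; pick the valid fit with lowest chi^2
    valid = {tag: chi2 for tag, (ok, chi2) in fits.items() if ok}
    if not valid:
        return None, None
    tag = min(valid, key=valid.get)
    return tag, valid[tag]

print(best_fit({'k3': (True, 1.4), 'k4': (True, 1.1), 'k5': (False, 0.9)}))  # ('k4', 1.1)
```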

plot_best_fit(**kw)[source]

A convenience function for plotting the best fit

class omfit_classes.utils_fit.fitSL(x, y, yerr, knots=3, min_dist=0, first_knot=None, fixed_knots=False, fit_SOL=False, monotonic=False, min_slope=None, outliers=3, plot_best=False, allow_no_errorbar=False)[source]

Bases: omfit_classes.utils_fit.knotted_fit_base

Fit a profile of data using integrated scale lengths, ideally obtaining uncertainties in the fitting parameters.

Due to the nature of integrating scale lengths, this fitter is only good for data > 0 or data < 0.

Examples

>>> pkl = OMFITpickle(OMFITsrc+'/../samples/data_pickled.pkl')
>>> x = pkl['x'][0,:]
>>> y = pkl['y'][0,:]
>>> yerr = pkl['e'][0,:]
>>> fit = fitSL(x,y,yerr,fixed_knots=True,knots=-7,plot_best=True)

Along the way of obtaining the fit with the desired parameters, other intermediate fits may be obtained. These are stored in the fits attribute (a dict), whose keys indicate how each fit was obtained relative to the starting fit. For instance, to provide a variable knot fit, a fixed knot (equally spaced) fit is performed first. Similarly, an initial fit is necessary before any outliers can be detected and removed. The get_best_fit method is useful for determining which of all of the fits is the best, meaning the valid fit with the lowest reduced chi^2. Here valid means

  1. the knots are in order

  2. the knots are at least min_dist apart

  3. the errorbars on the fit parameters were able to be determined

  4. the errorbars of the knot locations are smaller than the distance between the knots

Note that 1) and 2) should be satisfied by using lmfit Parameter constraints, but it doesn’t hurt to double check :-)

Developer note: If the fitter is always failing to find the errorbars due to tolerance problems, there are some tolerance keywords that can be passed to lmfit.minimize: xtol, ftol, gtol that could be exposed.

Initialize the fitSL object, including calculating the first fit(s)

Parameters
  • x – The x values of the data

  • y – The values of the data

  • yerr – The errors of the data

Fit Keywords:

Parameters
  • knots

    • Positive integer: Use this number of knots as default (>=3)

    • Negative integer: Invoke the fit_knot_range method for the range (3,abs(knots))

    • list-like: Use this list as the starting point for the knot locations

  • min_dist – The minimum distance between knot locations

    • min_dist > 0 (faster): enforced by construction

    • min_dist < 0 (much slower): enforced with lmfit

  • first_knot – The first knot can be constrained to be above first_knot. The default is above min(x)+min_dist (The zeroth knot is at 0.)

  • fixed_knots – If True, do not allow the knot locations to change

  • fit_SOL – If True, include data points with x>1

  • monotonic – If True, only allow positive scale lengths

  • min_slope – Constrain the scale lengths to be above min_slope

  • outliers – Do an initial fit, then throw out any points a factor of outliers standard deviations away from the fit

Convenience Keywords:

Parameters
  • plot_best – Plot the best fit

  • allow_no_errorbar – If True, get_best_fit will return the best fit without errorbars if no valid fit with errorbars exists

build_params(**keyw)[source]

Build the lmfit.Parameters object needed for the fitting

Parameters

**keyw – The Fit Keywords documented in the __init__ method

Returns

The lmfit.Parameters translation of the settings given by **keyw

model(params, x, lmfit_out=None)[source]

Return the model integrated scale length curve at x

Parameters
  • params – The lmfit.Parameters object

  • x – evaluate model at x

  • lmfit_out – lmfit.MinimizerResult instance to use for getting uncertainties in the curve

plot(showZ=True, x=array([0.0, 0.01, 0.02, ..., 1.08, 1.09, 1.1]))[source]

Plot all fits calculated so far, each in its own tab of a FigureNotebook, where the tab is labeled by the shortened tag of the tag of the fit

Parameters
  • showZ – Overplot the values of the inverse scale lengths in red

  • x – The x values to use for plotting the fitted curve

Returns

The FigureNotebook instance created

plot_individual_fit(tag, ax=None, showZ=True, x=array([0.0, 0.0011, 0.0022, ..., 1.0978, 1.0989, 1.1]))[source]

Plot a single fit, characterized by tag

Parameters
  • tag – The tag of the fit that is to be plotted, must be in self.fits.keys()

  • ax – The axes to plot into (one is created if None)

  • showZ – Overplot the values of the inverse scale lengths in red

  • x – The x values to use for plotting the fitted curve

class omfit_classes.utils_fit.fitSpline(x, y, yerr, knots=3, min_dist=0, first_knot=None, fixed_knots=False, fit_SOL=False, monotonic=False, min_slope=None, outliers=3, plot_best=False, allow_no_errorbar=False)[source]

Bases: omfit_classes.utils_fit.knotted_fit_base

Fit a spline to some data; return the fit with uncertainties

Does basic checking for x,y,yerr then stores them in self.x, self.y, self.yerr such that x is monotonically increasing

build_params(**keyw)[source]

Build the lmfit.Parameters object needed for the fitting

Parameters

**keyw – The Fit Keywords documented in the __init__ method

Returns

The lmfit.Parameters translation of the settings given by **keyw

model(params, x, lmfit_out=None)[source]

Return the model spline curve at x

Parameters
  • params – The lmfit.Parameters object

  • x – evaluate model at x

  • lmfit_out – lmfit.MinimizerResult instance to use for getting uncertainties in the curve

omfit_classes.utils_fit.xy_outliers(x, y, cutoff=1.2, return_valid=False)[source]

This function returns the index of the outlier x,y data. It is useful to run before fitting experimental data, to remove outliers. This function works assuming that the first and the last samples of the x/y data set are valid data points (i.e. not outliers).

Parameters
  • x – x data (e.g. rho)

  • y – y data (e.g. ne)

  • cutoff – sensitivity of the cutoff (smaller numbers -> more sensitive [min=1])

  • return_valid – if False returns the index of the outliers, if True returns the index of the valid data

Returns

index of outliers or valid data depending on return_valid switch
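The underlying idea (trust the endpoints, flag interior points that stray from the local trend) can be sketched in a few lines; the function name and the 3-point running-median criterion below are illustrative, not the OMFIT implementation:

```python
import numpy as np

def xy_outliers_sketch(x, y, cutoff=1.2, return_valid=False):
    """Flag interior points that deviate strongly from a 3-point running
    median; the first and last samples are assumed valid, as in the docs.
    (Illustrative criterion, not the OMFIT algorithm.)"""
    y = np.asarray(y, dtype=float)
    smooth = y.copy()
    # running median of each interior point with its two neighbors
    smooth[1:-1] = np.median(np.column_stack([y[:-2], y[1:-1], y[2:]]), axis=1)
    dev = np.abs(y - smooth)  # zero at the (trusted) endpoints
    bad = np.where(dev > cutoff * np.std(y))[0]
    if return_valid:
        return np.setdiff1d(np.arange(len(y)), bad)
    return bad
```

As in the documented function, smaller cutoff values flag more points.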

class omfit_classes.utils_fit.fitGP(xx, yy, ey, noise_ub=2.0, random_starts=20, zero_value_outside=True, ntanh=1, verbose=False)[source]

Bases: object

x: array or array of arrays

Independent variable data points.

y: array or array of arrays

Dependent variable data points.

e: array or array of arrays

Uncertainty in the dependent variable data points.

noise_ub: float, optional

Upper bound on a multiplicative factor that will be optimized to infer the most probable systematic underestimation of uncertainties. Note that this factor is applied over the entire data profile, although diagnostic uncertainties are expected to be heteroscedastic. Default is 2 (giving significant freedom to the optimizer).

random_starts: int, optional

Number of random starts for the optimization of the hyperparameters of the GP. Each random start begins sampling the posterior distribution in a different way. The optimization that gives the largest posterior probability is chosen. It is recommended to increase this value if the fit proves difficult. If the regression fails, it might be necessary to vary the constraints given in the _fit method of the class GPfit2 below, which has been kept rather general for common usage. Default is 20.

zero_value_outside: bool, optional

Set to True if the profile to be evaluated is expected to go to zero beyond the LCFS, e.g. for electron temperature and density; note that this option does NOT force the value to be 0 at the LCFS, but only attempts to constrain the fit to stabilize to 0 well beyond rho=1. Profiles like those of omega_tor_12C6 and T_12C6 are experimentally observed not to go to 0 at the LCFS, so this option should be set to False for these. Default is True.

ntanh: integer, optional

Set to 2 if an internal transport barrier is expected. Default is 1 (no ITB expected). This parameter has NOT been tested recently.

verbose: bool, optional

If set to True, outputs messages from non-linear kernel optimization. Default is False.

(object) fit: call at the points at which the profile is to be evaluated, e.g. if the locations are stored in an array xo, call fo = fit(xo). For an example, see 7_fit.py in OMFITprofiles.

plot(profile=None, ngauss=1)[source]

Convenience function to plot the input data and the result of the fit.

Parameters
  • profile – int, optional. Profile to evaluate if more than one has been computed and included in the gp object. To call the nth profile, set profile=n. If None, it will return an array of arrays.

  • ngauss – int, optional. Number of shaded standard deviations

Returns

None

class omfit_classes.utils_fit.fitCH(x, y, yerr, m=18)[source]

Bases: object

Fitting of kinetic profiles by Chebyshev polynomials. Adapted from a MATLAB function by A. Marinoni <marinoni@fusion.gat.com>

Parameters
  • x – radial coordinate

  • y – data

  • yerr – data uncertainties

  • m – Polynomial degree

plot()[source]

Plotting of the raw data and fit with uncertainties

Returns

None

class omfit_classes.utils_fit.fitLG(x, y, e, d, ng=100, sm=1, nmax=None)[source]

Bases: object

This class provides linear fitting of experimental profiles, with Gaussian blurring for smoothing.

This procedure was inspired by discussions with David Eldon about the Weighted Average of Interpolations to a Common base (WAIC) technique that he describes in his thesis. However, the implementation here is quite different: instead of using a weighted average, the median profile is taken, which allows for robust rejection of outliers. In this implementation the profile smoothing is obtained by radially perturbing the measurements based on the farthest distance to their neighboring points.

Parameters
  • x – x values of the experimental data

  • y – experimental data

  • e – uncertainties of the experimental data

  • d – data time identifier

  • ng – number of gaussian instances

  • sm – smoothing

  • nmax – take no more than nmax data points

plot(variations=True)[source]
omfit_classes.utils_fit.mtanh(c, x, y=None, e=1.0, a2_plus_a3=None)[source]

Modified tanh function

>> if len(c)==6:
>>     y=a0*(a1+tanh((a2-x)/a3))+a4*(a5-x)*(x<a5)

a0: scale
a1: offset in y
a2: shift in x
a3: width
a4: slope of the linear part
a5: where the linear part takes off

>> if len(c)==5:
>>     y=a0*(a1+tanh((a2-x)/a3))+a4*(a2-a3-x)*(x<a2-a3)

a0: scale
a1: offset in y
a2: shift in x
a3: width
a4: slope of the linear part

Parameters
  • c – array of coefficients [a0,a1,a2,a3,a4,(a5)]

  • x – x data to fit

  • y – y data to fit

  • e – y error of the data to fit

  • a2_plus_a3 – force sum of a2 and a3 to be some value NOTE: In this case a3 should be removed from input vector c

Returns

cost if y is provided, otherwise the evaluated y(x)
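The 6-coefficient branch documented above can be evaluated directly; a minimal sketch of the model evaluation only (the cost/weighting logic is omitted, and the function name is illustrative):

```python
import numpy as np

def mtanh_eval(c, x):
    """Evaluate the 6-coefficient modified tanh from the docstring:
    y = a0*(a1 + tanh((a2-x)/a3)) + a4*(a5-x)*(x < a5)"""
    a0, a1, a2, a3, a4, a5 = c
    x = np.asarray(x, dtype=float)
    # tanh pedestal plus a linear piece that is active only for x < a5
    return a0 * (a1 + np.tanh((a2 - x) / a3)) + a4 * (a5 - x) * (x < a5)
```

When y is supplied, the real function instead returns a cost built from the residuals weighted by e.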

omfit_classes.utils_fit.mtanh_gauss_model(x, bheight, bsol, bpos, bwidth, bslope, aheight, awidth, aexp)[source]

Modified hyperbolic tangent function for fitting the pedestal, combined with a gaussian function for fitting the core. Stefanikova, E., et al., Rev. Sci. Instrum. 87 (11), Nov 2016. This function is designed to fit H-mode density and temperature profiles as a function of psi_n.

omfit_classes.utils_fit.tanh_model(x, a1, a2, a3, c)[source]
omfit_classes.utils_fit.tanh_poly_model(x, a1, a2, a3, c, p2, p3, p4)[source]
class omfit_classes.utils_fit.fit_base(x, y, yerr)[source]

Bases: object

get_param_unc(lmfit_out)[source]
property uparams
property params
valid_fit(model_out)[source]
class omfit_classes.utils_fit.fit_mtanh_gauss(x, y, yerr, **kw)[source]

Bases: omfit_classes.utils_fit.fit_base

class omfit_classes.utils_fit.fit_tanh(x, y, yerr, **kw)[source]

Bases: omfit_classes.utils_fit.fit_base

class omfit_classes.utils_fit.fit_tanh_poly(x, y, yerr, **kw)[source]

Bases: omfit_classes.utils_fit.fit_base

omfit_classes.utils_fit.tanh_through_2_points(x0, y0, x=None)[source]

Find tanh passing through two points x0=[x_0,x_1] and y0=[y_0,y_1]

y=a1*tanh((a2-x)/a3)+c

Parameters
  • x0 – iterable of two values (x coords)

  • y0 – iterable of two values (y coords)

  • x – array of coordinates where to evaluate tanh. If None function will return fit parameters

Returns

if x is None then return tanh coefficients (a1,a2,a3,c), otherwise returns y(x) points
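Since a tanh through two points is underdetermined (four coefficients, two constraints), one consistent choice is to fix a2 and a3 and solve a1 and c linearly. A sketch under those assumptions (the OMFIT function's actual choices for a2 and a3 may differ):

```python
import numpy as np

def tanh_through_2_points_sketch(x0, y0, x=None):
    """Fit y = a1*tanh((a2-x)/a3) + c through two points, fixing the
    inflection a2 at the midpoint and the width a3 at a quarter of the
    spacing (arbitrary assumptions), then solving a1 and c exactly."""
    (xa, xb), (ya, yb) = x0, y0
    a2 = 0.5 * (xa + xb)
    a3 = 0.25 * abs(xb - xa)
    t = np.tanh((a2 - xa) / a3)   # by symmetry, tanh((a2-xb)/a3) == -t
    a1 = (ya - yb) / (2.0 * t)    # solves ya = a1*t + c, yb = -a1*t + c
    c = 0.5 * (ya + yb)
    if x is None:
        return a1, a2, a3, c
    return a1 * np.tanh((a2 - np.asarray(x, dtype=float)) / a3) + c
```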

omfit_classes.utils_fit.toq_profiles(psin, width, core, ped, edge, expin, expout)[source]

This is a direct fortran->python translation from the TOQ source

Parameters
  • psin – psi grid

  • width – width of tanh

  • core – core value

  • ped – pedestal value

  • edge – separatrix value

  • expin – inner exponent (EPED1: 1.1 for n_e, and 1.2 for T_e)

  • expout – outer exponent (EPED1: 1.1 for n_e, and 1.4 for T_e)

Returns

profile

omfit_classes.utils_fit.mtanh_polyexp(x, params)[source]

Given an lmfit Parameters object, generate the mtanh_poly model

omfit_classes.utils_fit.mtanh_polyexp_res(params, x_data, y_data, y_err)[source]
class omfit_classes.utils_fit.GASpline(xdata, ydata, ydataerr, xufit, y0=None, y0p=0.0, y1=None, y1p=None, sol=True, solc=False, numknot=3, autoknot=True, knotloc=None, knotbias=0, sepval_min=None, sepval_max=None, scrapewidth_min=None, scrapewidth_max=None, method='leastsq', maxiter=2000, monte=0, verbose=False, doPlot=False)[source]

Bases: object

Python based replacement for GAprofiles IDL spline routine “spl_mod”

  • Code accepts irregularly spaced (x,y,e) data and returns fit on regularly spaced grid

  • Numerical spline procedure based on Numerical Recipes Sec. 3.3, equations 3.3.2, 3.3.4, 3.3.5, 3.3.7

  • Auto-knotting uses LMFIT minimization with chosen scheme

  • Boundary conditions enforced with matrix elements

The logic of this implementation is as follows:

  • The default is to auto-knot, which uses least-squares minimization to choose the knot locations. Otherwise manual knots are used and LMFIT is not called.

  • If auto-knotting, there are options for the guess of the knot locations and an option to bias the knots.

  • If a knot guess is supplied it is used; otherwise one is generated with linspace: uniformly if the knot bias is None or >-1, or with the requested knot bias applied.

For the edge data, the logic is as follows:

  • We can have auto/manual knots, free/fixed boundary value, and fit/ignore edge data.

  • When we fit edge data, that edge data places a constraint on the boundary value.

When monte-carlo is used, the return value is an unumpy uncertainties array that contains the mean and standard-deviation of the monte-carlo trials.

>>> x = np.linspace(0,1,21)
>>> y = np.sin(2*np.pi*x)
>>> e = np.repeat(0.1,len(x))
>>> xo = np.linspace(0,1,101)
>>> fit_obj = GASpline(x,y,e,xo,numknot=4,doPlot=True,monte=20)
>>> uerrorbar(x,uarray(y,e))
>>> pyplot.plot(xo,nominal_values(fit_obj(xo)))
>>> uband(xo,fit_obj(xo))
design_matrix(xcore, knotloc, bcs)[source]

Design matrix for cubic spline interpolation (Numerical Recipes Sec. 3.3)

Parameters
  • xcore – rho values for data on [0,1]

  • knotloc – knot locations on [0,1]

  • bcs – dictionary of boundary conditions

Returns

design matrix for cubic interpolating spline

get_knotvals(xcore, ycore, wgt, d, geeinvc, fx, b, knotloc, bcs)[source]

Get the spline y-values at the knot locations that best fit the data

Parameters
  • xcore – x values of measured data

  • ycore – y values of measured data

  • wgt – weight of measured data

  • d, geeinvc, fx, b – return values from design_matrix

  • knotloc – location of spline knots [0, …, 1]

  • bcs – dictionary of boundary conditions

Returns

values of the cubic interpolating spline at knot locations that best match the data

get_spl(x, y, w, bcs, k)[source]
Parameters
  • xdata – rho values

  • ydata – data values

  • wgt – data weight (1/uncertainty)

  • knotloc – location of knots

  • bcs – boundary conditions

Returns

spline fit on rho values

do_fit()[source]
plot()[source]
class omfit_classes.utils_fit.MtanhPolyExpFit(x_data, y_data, y_err, pow_core=2, method='leastsq', verbose=False, onAxis_gradzero=False, onAxis_value=None, fitEdge=True, edge_value=None, maxiter=None, blend_width_min=0.001, edge_width_min=0.001, edge_width_max=None, sym_guess=0.975, sym_min=0.9, sym_max=1.0, pos_edge_exp=False)[source]

Bases: object

Generalized fitter derived from B. Grierson tools. Fits the core with a pow_core polynomial C(x) and the edge with an offset exponential of the form E(x) = offset + A*np.exp(-(x - xsym)/edge_width), blending the two together about x=xsym with tanh-like behavior:

y_fit = (C(x)*np.exp(z) + E(x)*np.exp(-z))/(np.exp(z) + np.exp(-z)), where z = (xsym - x)/blend_width

Parameters
  • method – minimization method to use

  • verbose – turns on details of set flags

  • onAxis_gradzero – turn on to force C’(0) = 0 (effectively y’(0) for xsym/blend_width >> 1)

  • onAxis_value – set to force y(0) = onAxis_value

  • fitEdge – set = False to require E(x) = offset, A=edge_width=0

  • edge_value – set to force y(x=1) = edge_value

  • maxiter – controls maximum # of iterations

  • blend_width_min – minimum value for the core edge blending

  • edge_width_min – minimum value for the edge

  • sym_guess – guess for the x location of the pedestal symmetry point

  • sym_min – constraint for minimum x location for symmetry point

  • sym_max – constraint for maximum x location for symmetry point

  • pos_edge_exp – force the exponential to be positively valued so that it has a negative slope in the SOL

Methods

__call__(x)

Evaluate mtanh_polyexp at x, propagating correlated uncertainties in the fit parameters.
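The core/edge blending formula from the class docstring can be sketched as follows. The helper name and the fixed coefficients are illustrative only; the class itself fits C(x), offset, A, xsym, edge_width and blend_width rather than taking them as inputs:

```python
import numpy as np

def blend_core_edge(x, core_coeffs, offset, A, xsym, edge_width, blend_width):
    """Blend a polynomial core with an offset-exponential edge using the
    tanh-like weighting from the docstring (sketch, not the class API)."""
    x = np.asarray(x, dtype=float)
    C = np.polyval(core_coeffs, x)                     # core polynomial C(x)
    E = offset + A * np.exp(-(x - xsym) / edge_width)  # edge exponential E(x)
    z = (xsym - x) / blend_width
    # (C*e^z + E*e^-z)/(e^z + e^-z) rewritten as a weighted average:
    w = np.exp(z) / (np.exp(z) + np.exp(-z))           # ->1 in core, ->0 in edge
    return C * w + E * (1.0 - w)
```

Well inside the core (x << xsym) the result approaches C(x); well outside it approaches E(x).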

class omfit_classes.utils_fit.UncertainRBF(x, d, e, centers=None, function='multiquadric', epsilon=None, norm=None)[source]

Bases: object

A class for radial basis function fitting of n-dimensional uncertain scattered data

Parameters
  • *args – arrays x, y, z, …, d, e where x, y, z, … are the coordinates of the nodes d is the array of values at the nodes, and e is the standard deviation error of the values at the nodes

  • centers –
    None: the RBFs are centered on the input data points (can be very expensive for a large number of nodes)
    -N: N nodes randomly distributed in the domain
    N: N*N nodes uniformly distributed in the domain
    np.array(N,X): user-defined array with X coordinates of the N nodes

  • epsilon – float Adjustable constant for gaussian - defaults to approximate average distance between nodes

  • function –
    ‘multiquadric’: np.sqrt((r / self.epsilon) ** 2 + 1) (default)
    ‘inverse’: 1.0 / np.sqrt((r / self.epsilon) ** 2 + 1)
    ‘gaussian’: np.exp(-(r**2 / self.epsilon))
    ‘linear’: r
    ‘cubic’: r ** 3
    ‘quintic’: r ** 5
    ‘thin_plate’: r ** 2 * np.log(r)

  • norm – default “distance” is the euclidean norm (2-norm)

Examples

>>> x=np.linspace(0,1,21)
>>> y=np.linspace(0,1,21)
>>> e=x*0+.5
>>> e[abs(x-0.5)<0.1]=8
>>> y[abs(x-0.5)<0.1]=10
>>>
>>> x1 = np.linspace(0,1,100)
>>> y1 = UncertainRBF(x, y, e, centers=None, epsilon=1)(x1)
>>> y0 = UncertainRBF(x, y, e*0+1, centers=None, epsilon=1)(x1)
>>>
>>> pyplot.subplot(2,2,1)
>>> errorbar(x,y,e,ls='',marker='.',label='raw 1D data')
>>> uband(x1,y1,label='1D RBF w/ uncertainty')
>>> pyplot.plot(x1,nominal_values(y0),label='1D RBF w/o uncertainty')
>>> pyplot.title('1D')
>>> legend(loc=0)
>>>
>>> x = np.random.rand(1000)*4.0-2.0
>>> y = np.random.rand(1000)*4.0-2.0
>>> e = np.random.rand(1000)
>>> z = x*np.exp(-x**2-y**2)
>>> ti = np.linspace(-2.0, 2.0, 100)
>>> XI, YI = np.meshgrid(ti, ti)
>>>
>>> rbf = UncertainRBF(x, y, z+e, abs(e), centers=5, epsilon=1)
>>> ZI = nominal_values(rbf(XI, YI))
>>>
>>> rbf = UncertainRBF(x, y, z+e, abs(e)*0+1, centers=5, epsilon=1)
>>> ZC = nominal_values(rbf(XI, YI))
>>>
>>> pyplot.subplot(2,2,3)
>>> pyplot.scatter(x, y, c=z, s=100, edgecolor='none')
>>> pyplot.xlim(-2, 2)
>>> pyplot.ylim(-2, 2)
>>> pyplot.colorbar()
>>> pyplot.title('raw 2D data (w/o noise)')
>>>
>>> pyplot.subplot(2,2,2)
>>> pyplot.pcolor(XI, YI, ZI)
>>> pyplot.xlim(-2, 2)
>>> pyplot.ylim(-2, 2)
>>> pyplot.colorbar()
>>> pyplot.title('2D RBF w/ uncertainty')
>>>
>>> pyplot.subplot(2,2,4)
>>> pyplot.pcolor(XI, YI, ZC)
>>> pyplot.xlim(-2, 2)
>>> pyplot.ylim(-2, 2)
>>> pyplot.colorbar()
>>> pyplot.title('2D RBF w/o uncertainty')
omfit_classes.utils_fit.bimodal_gaussian(xx, center0, center1, sigma0, sigma1, amp0, amp1, delta0=None, delta1=None, debug=False)[source]

Calculates a bimodal gaussian function: the sum of two gaussians.

Parameters
  • xx – Independent variable

  • center0 – Center of first gaussian before log transform

  • center1 – Center of second gaussian before log transform

  • sigma0 – Sigma of first gaussian before log transform

  • sigma1 – Sigma of second gaussian before log transform

  • amp0 – Amplitude of first gaussian

  • amp1 – Amplitude of second gaussian

  • delta0 – The fitter uses this variable to help it set up limits internally. The deltas are not actually used in the model.

  • delta1 – Not used in the model; here to help the fitter.

  • debug – T/F: print debugging stuff (keep this off during fit, but maybe on for testing)

Returns

Model function y(x)
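A minimal sketch of the two-Gaussian sum (omitting the log-transform bookkeeping hinted at in the parameter descriptions and the unused delta parameters; the function name is illustrative):

```python
import numpy as np

def bimodal_gaussian_sketch(xx, center0, center1, sigma0, sigma1, amp0, amp1):
    """Sum of two Gaussians: each peak has its own center, width and amplitude."""
    g0 = amp0 * np.exp(-((xx - center0) ** 2) / (2.0 * sigma0 ** 2))
    g1 = amp1 * np.exp(-((xx - center1) ** 2) / (2.0 * sigma1 ** 2))
    return g0 + g1
```

Near either center (with the other peak far away) the model value is approximately that peak's amplitude.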

class omfit_classes.utils_fit.BimodalGaussianFit(x=None, pdf=None, guess={}, middle=None, spammy=False, limits=None)[source]

Bases: object

The fit model is a sum of two Gaussians. If a middle value is provided, limits will be imposed to try to keep the Gaussians separated.

Initialize variables and call functions

Parameters
  • x – Independent variable

  • pdf – Probability distribution function

  • guess – Guess for parameters. Use {} or leave as default for auto guess.

  • middle – X value of a dividing line that is known to be between two separate Gaussians. Sets up additional limits on centers. No effect if None.

  • spammy – Many debugging print statements

  • limits – None for default limits or dictionary with PARAM_LIM keys where PARAM is center0, center1, etc. and LIM is min or max

default_guess = {}
make_guess()[source]

Given a dictionary with some guesses (can be incomplete or even empty), fills in any missing values with defaults, makes consistent deltas, and then defines a parameter set.

Produces a parameter instance suitable for input to lmfit .fit() method and stores it as self.guess.

bimodal_gaussian_fit()[source]

Fits a probability distribution function (like a histogram output) with a two-gaussian function

Returns

Minimizer result

utils_base

omfit_classes.utils_base.open(file, mode='r', buffering=- 1, encoding=None, errors=None, newline=None, closefd=True, opener=None)[source]

Open file and return a stream. Raise OSError upon failure.

file is either a text or byte string giving the name (and the path if the file isn’t in the current working directory) of the file to be opened or an integer file descriptor of the file to be wrapped. (If a file descriptor is given, it is closed when the returned I/O object is closed, unless closefd is set to False.)

mode is an optional string that specifies the mode in which the file is opened. It defaults to ‘r’ which means open for reading in text mode. Other common values are ‘w’ for writing (truncating the file if it already exists), ‘x’ for creating and writing to a new file, and ‘a’ for appending (which on some Unix systems, means that all writes append to the end of the file regardless of the current seek position). In text mode, if encoding is not specified the encoding used is platform dependent: locale.getpreferredencoding(False) is called to get the current locale encoding. (For reading and writing raw bytes use binary mode and leave encoding unspecified.) The available modes are:

Character

Meaning

‘r’

open for reading (default)

‘w’

open for writing, truncating the file first

‘x’

create a new file and open it for writing

‘a’

open for writing, appending to the end of the file if it exists

‘b’

binary mode

‘t’

text mode (default)

‘+’

open a disk file for updating (reading and writing)

‘U’

universal newline mode (deprecated)

The default mode is ‘rt’ (open for reading text). For binary random access, the mode ‘w+b’ opens and truncates the file to 0 bytes, while ‘r+b’ opens the file without truncation. The ‘x’ mode implies ‘w’ and raises a FileExistsError if the file already exists.

Python distinguishes between files opened in binary and text modes, even when the underlying operating system doesn’t. Files opened in binary mode (appending ‘b’ to the mode argument) return contents as bytes objects without any decoding. In text mode (the default, or when ‘t’ is appended to the mode argument), the contents of the file are returned as strings, the bytes having been first decoded using a platform-dependent encoding or using the specified encoding if given.

‘U’ mode is deprecated and will raise an exception in future versions of Python. It has no effect in Python 3. Use newline to control universal newlines mode.

buffering is an optional integer used to set the buffering policy. Pass 0 to switch buffering off (only allowed in binary mode), 1 to select line buffering (only usable in text mode), and an integer > 1 to indicate the size of a fixed-size chunk buffer. When no buffering argument is given, the default buffering policy works as follows:

  • Binary files are buffered in fixed-size chunks; the size of the buffer is chosen using a heuristic trying to determine the underlying device’s “block size” and falling back on io.DEFAULT_BUFFER_SIZE. On many systems, the buffer will typically be 4096 or 8192 bytes long.

  • “Interactive” text files (files for which isatty() returns True) use line buffering. Other text files use the policy described above for binary files.

encoding is the name of the encoding used to decode or encode the file. This should only be used in text mode. The default encoding is platform dependent, but any encoding supported by Python can be passed. See the codecs module for the list of supported encodings.

errors is an optional string that specifies how encoding errors are to be handled—this argument should not be used in binary mode. Pass ‘strict’ to raise a ValueError exception if there is an encoding error (the default of None has the same effect), or pass ‘ignore’ to ignore errors. (Note that ignoring encoding errors can lead to data loss.) See the documentation for codecs.register or run ‘help(codecs.Codec)’ for a list of the permitted encoding error strings.

newline controls how universal newlines works (it only applies to text mode). It can be None, ‘’, ‘\n’, ‘\r’, and ‘\r\n’. It works as follows:

  • On input, if newline is None, universal newlines mode is enabled. Lines in the input can end in ‘\n’, ‘\r’, or ‘\r\n’, and these are translated into ‘\n’ before being returned to the caller. If it is ‘’, universal newline mode is enabled, but line endings are returned to the caller untranslated. If it has any of the other legal values, input lines are only terminated by the given string, and the line ending is returned to the caller untranslated.

  • On output, if newline is None, any ‘\n’ characters written are translated to the system default line separator, os.linesep. If newline is ‘’ or ‘\n’, no translation takes place. If newline is any of the other legal values, any ‘\n’ characters written are translated to the given string.

If closefd is False, the underlying file descriptor will be kept open when the file is closed. This does not work when a file name is given and must be True in that case.

A custom opener can be used by passing a callable as opener. The underlying file descriptor for the file object is then obtained by calling opener with (file, flags). opener must return an open file descriptor (passing os.open as opener results in functionality similar to passing None).

open() returns a file object whose type depends on the mode, and through which the standard file operations such as reading and writing are performed. When open() is used to open a file in a text mode (‘w’, ‘r’, ‘wt’, ‘rt’, etc.), it returns a TextIOWrapper. When used to open a file in a binary mode, the returned class varies: in read binary mode, it returns a BufferedReader; in write binary and append binary modes, it returns a BufferedWriter, and in read/write mode, it returns a BufferedRandom.

It is also possible to use a string or bytearray as a file for both reading and writing. For strings StringIO can be used like a file opened in a text mode, and for bytes a BytesIO can be used like a file opened in a binary mode.

omfit_classes.utils_base.deprecated(func)[source]

This is a decorator which can be used to mark functions as deprecated. It will result in a warning being emitted when the function is used.

omfit_classes.utils_base.b2s(obj)[source]
omfit_classes.utils_base.safe_eval_environment_variable(var, default)[source]

Safely evaluate environmental variable

Parameters
  • var – string with environmental variable to evaluate

  • default – default value for the environmental variable

omfit_classes.utils_base.warning_on_one_line(message, category, filename, lineno, file=None, line=None)[source]
omfit_classes.utils_base.hasattr_no_dynaLoad(object, attribute)[source]

same as hasattr function but does not trigger dynamic loading

omfit_classes.utils_base.isinstance_str(inv, cls)[source]

checks if an object is of a certain type by looking at the class name (not the class object). This is useful to circumvent the need to import Python modules.

Parameters
  • inv – object of which to check the class

  • cls – string or list of string with the name of the class(es) to be checked

Returns

True/False

omfit_classes.utils_base.evalExpr(inv)[source]

Return the object that dynamic expressions return when evaluated. This allows OMFITexpression(‘None’) is None to work as one would expect. Invalid expressions will raise an OMFITexception when evaluated.

Parameters

inv – input object

Returns

  • If inv was a dynamic expression, returns the object that dynamic expressions return when evaluated

  • Else returns the input object

omfit_classes.utils_base.freezeExpr(me, remove_OMFITexpressionError=False)[source]

Traverse a dictionary and evaluate OMFIT dynamic expressions in it. NOTE: this function operates in place

Parameters
  • me – input dictionary

  • remove_OMFITexpressionError – remove entries that evaluate as OMFITexpressionError

Returns

updated dictionary

omfit_classes.utils_base.is_none(inv)[source]

This is a convenience function to evaluate whether an object or an expression is None. Use of this function is preferred over testing if an expression is None with the == operator, because np arrays evaluate == on a per-element basis.

Parameters

inv – input object

Returns

True/False
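The per-element behavior of == that motivates this function can be seen in a short sketch (the helper name is hypothetical; the real function also unwraps OMFIT expressions):

```python
import numpy as np

def is_none_sketch(inv):
    """Identity check avoids numpy's elementwise `==` broadcasting."""
    return inv is None

arr = np.array([1.0, 2.0, 3.0])
elementwise = (arr == None)  # noqa: E711 -- yields an elementwise boolean array
# `if arr == None:` would raise "truth value ... is ambiguous",
# whereas the identity test returns a single bool.
```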

omfit_classes.utils_base.is_bool(value)[source]

Convenience function to check if value is boolean

Parameters

value – value to check

Returns

True/False

omfit_classes.utils_base.is_int(value)[source]

Convenience function to check if value is integer

Parameters

value – value to check

Returns

True/False

omfit_classes.utils_base.is_float(value)[source]

Convenience function to check if value is float

Parameters

value – value to check

Returns

True/False

omfit_classes.utils_base.is_numeric(value)[source]

Convenience function to check if value is numeric

Parameters

value – value to check

Returns

True/False

omfit_classes.utils_base.is_number_string(my_string)[source]

Determines whether a string may be parsed as a number

Parameters

my_string – string

Returns

bool
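A minimal sketch of such a check, assuming float() acceptance as the criterion (the OMFIT implementation may use a different test):

```python
def is_number_string_sketch(my_string):
    """A string parses as a number if float() accepts it
    (covers ints, floats, and scientific notation)."""
    try:
        float(my_string)
        return True
    except (TypeError, ValueError):
        return False
```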

omfit_classes.utils_base.is_alphanumeric(value)[source]

Convenience function to check if value is alphanumeric

Parameters

value – value to check

Returns

True/False

omfit_classes.utils_base.is_array(value)[source]

Convenience function to check if value is list/tuple/array

Parameters

value – value to check

Returns

True/False

omfit_classes.utils_base.is_string(value)[source]

Convenience function to check if value is string

Parameters

value – value to check

Returns

True/False

omfit_classes.utils_base.is_email(value)[source]
omfit_classes.utils_base.is_int_array(val)[source]

Convenience function to check if value is a list/tuple/array of integers

Parameters

value – value to check

Returns

True/False

class omfit_classes.utils_base.qRedirector(tag='STDOUT')[source]

Bases: object

A class for redirecting stdout and stderr to this Text widget

write(string)[source]
flush()[source]
class omfit_classes.utils_base.console_color[source]

Bases: object

static BLACK(x='')[source]
static RED(x='')[source]
static GREEN(x='')[source]
static YELLOW(x='')[source]
static BLUE(x='')[source]
static MAGENTA(x='')[source]
static CYAN(x='')[source]
static WHITE(x='')[source]
static UNDERLINE(x='')[source]
static RESET(x='')[source]
omfit_classes.utils_base.tag_print(*objects, **kw)[source]

Works like the print function, but used to print to GUI (if GUI is available). The coloring of the GUI print is determined by the tag parameter.

Parameters
  • *objects – string/objects to be printed

  • sep – separator (default: ‘ ‘)

  • end – new line character (default: ‘\n’)

  • tag

    one of the following:

    • ’STDOUT’

    • ’STDERR’

    • ’DEBUG’

    • ’PROGRAM_OUT’

    • ’PROGRAM_ERR’

    • ’INFO’

    • ’WARNING’

    • ’HIST’

    • ’HELP’

omfit_classes.utils_base.printi(*objects, **kw)[source]

Function to print with INFO style

Parameters
  • *objects – what to print

  • **kw – keywords passed to the print function

Returns

return from print function

omfit_classes.utils_base.pprinti(*objects, **kw)[source]

Function to pretty-print with INFO style

Parameters
  • *objects – what to print

  • **kw – keywords passed to the print function

Returns

return from pprint function

omfit_classes.utils_base.printe(*objects, **kw)[source]

Function to print with ERROR style

Parameters
  • *objects – what to print

  • **kw – keywords passed to the print function

Returns

return from print function

omfit_classes.utils_base.pprinte(*objects, **kw)[source]

Function to pretty-print with STDERR style

Parameters
  • *objects – what to print

  • **kw – keywords passed to the pprint function

Returns

return from pprint function

omfit_classes.utils_base.printw(*objects, **kw)[source]

Function to print with WARNING style

Parameters
  • *objects – what to print

  • **kw – keywords passed to the print function

Returns

return from print function

omfit_classes.utils_base.pprintw(*objects, **kw)[source]

Function to pretty-print with WARNING style

Parameters
  • *objects – what to print

  • **kw – keywords passed to the pprint function

Returns

return from pprint function

omfit_classes.utils_base.printd(*objects, **kw)[source]

Function to print with DEBUG style. Printing is done based on the environmental variable OMFIT_DEBUG, which can either be a string with an integer (indicating a debug level) or a string with a debug topic as defined in OMFITaux[‘debug_logs’]

Parameters
  • *objects – what to print

  • level – minimum value of debug for which printing will occur

  • **kw – keywords passed to the print function

Returns

return from print function

omfit_classes.utils_base.printt(*objects, **kw)[source]

Function to force print to terminal instead of GUI

Parameters
  • *objects – what to print

  • err – print to standard error

  • **kw – keywords passed to the print function

Returns

return from print function

class omfit_classes.utils_base.quiet_environment[source]

Bases: object

This environment quiets all output (stdout and stderr)

>> print('hello A')
>> with quiet_environment() as f:
>>     print('hello B')
>>     print('hello C')
>>     tmp = f.stdout
>> print('hello D')
>> print(tmp)

property stdout
property stderr
omfit_classes.utils_base.size_of_dir(folder)[source]

function returns the folder size as a number

Parameters

folder – directory path

Returns

size in bytes

omfit_classes.utils_base.sizeof_fmt(filename, separator='', format=None, unit=None)[source]

function returns a string with nicely formatted filesize

Parameters
  • filename – string with path to the file or integer representing size in bytes

  • separator – string between file size and units

  • format – default None, format for the number

  • unit – default None, unit of measure

Returns

string with file size
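The size formatting can be sketched with a simple 1024-based loop (helper name, unit list, and exact number format are illustrative, not the OMFIT defaults):

```python
def sizeof_fmt_sketch(num_bytes, separator=' '):
    """Format a byte count with 1024-based units, one decimal place."""
    size = float(num_bytes)
    for unit in ('B', 'kB', 'MB', 'GB', 'TB'):
        if abs(size) < 1024.0 or unit == 'TB':
            return f'{size:3.1f}{separator}{unit}'
        size /= 1024.0  # promote to the next unit
```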

omfit_classes.utils_base.encode_ascii_ignore(string)[source]

This function provides fail-proof conversion of str to ascii

Note: non-ASCII characters are ignored

Parameters

string – str string

Returns

ASCII string

omfit_classes.utils_base.is_binary_file(filename)[source]

Detect if a file is binary or ASCII

Parameters

filename – path to the file

Returns

True if binary file else False
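A common heuristic for this kind of detection, shown as a sketch (a NUL byte in the first block; not necessarily the OMFIT criterion):

```python
def is_binary_file_sketch(filename, blocksize=1024):
    """Heuristic: a NUL byte in the first block almost always means binary."""
    with open(filename, 'rb') as f:
        return b'\x00' in f.read(blocksize)
```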

omfit_classes.utils_base.wrap(text, width)[source]

A word-wrap function that preserves existing line breaks and most spaces in the text. Expects that existing line breaks are posix newlines (\n).

Parameters
  • text – text to be wrapped

  • width – maximum line width
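One way to wrap while preserving existing line breaks is to fill each line independently; a sketch using the standard textwrap module (the OMFIT implementation may differ):

```python
import textwrap

def wrap_sketch(text, width):
    """Wrap each existing line independently so posix newlines survive."""
    return '\n'.join(
        textwrap.fill(line, width) if line else ''  # keep blank lines blank
        for line in text.split('\n')
    )
```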

omfit_classes.utils_base.ascii_progress_bar(n, a=0, b=100, mess='', newline=False, clean=False, width=20, fill='#', void='-', style=' [{sfill}{svoid}] {perc:3.2f}% {mess}', tag='INFO', quiet=None)[source]

Displays an ASCII progress bar

Parameters
  • n – current value OR iterable

  • a – default 0, start value (ignored if n is an iterable)

  • b – default 100, end value (ignored if n is an iterable)

  • mess – default blank, message to be displayed

  • newline – default False, use newlines rather than carriage returns

  • clean – default False, clean out progress bar when end is reached

  • width – default 20, width in characters of the progress bar

  • fill – default ‘#’, filled progress bar character

  • void – default ‘-‘, empty progress bar character

  • style – default ‘ [{sfill}{svoid}] {perc:3.2f}% {mess}’ full format string

  • tag – default 'INFO', see tag_print()

  • quiet – do not print; default is quiet=None; if quiet is None, attempt to pick up value from bool(eval(os.environ[‘OMFIT_PROGRESS_BAR_QUIET’]))

Example:

    for n in ascii_progress_bar(np.linspace(12.34, 56.78, 4), mess=lambda x: f'{x:3.3}'):
        OMFITx.Refresh()  # will slow things down
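A single frame of such a bar can be rendered with plain string formatting. The sketch below is a simplified stand-in (numeric values only, no iterable support, no printing) that reuses the same style template as the signature above:

```python
def render_progress(n, a=0, b=100, mess='', width=20, fill='#', void='-',
                    style=' [{sfill}{svoid}] {perc:3.2f}% {mess}'):
    """Render one frame of an ASCII progress bar as a string."""
    frac = 0.0 if b == a else (n - a) / float(b - a)
    frac = min(max(frac, 0.0), 1.0)  # clamp to [0, 1]
    nfill = int(round(frac * width))
    return style.format(sfill=fill * nfill, svoid=void * (width - nfill),
                        perc=frac * 100, mess=mess)
```

The real function also accepts an iterable for `n` and handles carriage-return redrawing; this sketch only shows the rendering step.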

omfit_classes.utils_base.load_dynamic(module, path)[source]

Load and initialize a module implemented as a dynamically loadable shared library and return its module object. If the module was already initialized, it will be initialized again. Re-initialization involves copying the __dict__ attribute of the cached instance of the module over the value used in the module cached in sys.modules. Note: using shared libraries is highly system dependent, and not all systems support it.

Parameters
  • module – module name, used to construct the name of the initialization function: an external C function called init<name>() in the shared library is called

  • path – path to the shared library

omfit_classes.utils_base.is_running(process)[source]

This function returns True or False depending on whether a process is running or not

This relies on grep of the ps axw command

Parameters

process – string with process name or process ID

Returns

False if process is not running, otherwise line of ps command

omfit_classes.utils_base.kill_subprocesses(process=None)[source]

kill all of the sub-processes of a given process

Parameters

process – process of which sub-processes will be killed

omfit_classes.utils_base.memuse(as_bytes=False, **kw)[source]

return memory usage by current process

Parameters
  • as_bytes – return memory as number of bytes

  • **kw – keywords to be passed to sizeof_fmt()

Returns

formatted string with usage expressed in kB, MB, GB, TB

omfit_classes.utils_base.system_executable(executable, return_details=False, force_path=None)[source]

function that returns the full path of the executable

Parameters
  • executable – executable to return the full path of

  • return_details – return additional info for some commands (e.g. rsync)

Returns

string with full path of the executable

omfit_classes.utils_base.python_environment()[source]

returns string with module names that have __version__ attribute (similar to what pip freeze would do)

omfit_classes.utils_base.is_institution(instA, instB)[source]
omfit_classes.utils_base.splitDomain(namesIn)[source]
omfit_classes.utils_base.ping(host, timeout=2)[source]

Function for pinging remote server

Parameters
  • host – host name to be pinged

  • timeout – timeout time

Returns

boolean indicating if ping was successful

omfit_classes.utils_base.parse_server(server, default_username='fusionbot')[source]

parses strings in the form of username:password@server.somewhere.com:port

Parameters
  • server – input string

  • default_username – what username to return if username@server is not specified

Returns

tuple of four strings with user,password,server,port

omfit_classes.utils_base.assemble_server(username='', password='', server='', port='')[source]

returns assembled server string in the form username:password@server.somewhere.com:port

Parameters
  • username – username

  • password – password

  • server – server

  • port – port

Returns

assembled server string
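parse_server() and assemble_server() are inverses of each other. A simplified stand-in for the pair (illustrative names and default username, not the OMFIT implementation) can be sketched as:

```python
def parse_server_sketch(server, default_username='user'):
    """Split 'username:password@host:port' into (user, password, host, port)."""
    username, password = default_username, ''
    if '@' in server:
        userinfo, server = server.rsplit('@', 1)
        username, _, password = userinfo.partition(':')
    host, _, port = server.partition(':')
    return username, password, host, port

def assemble_server_sketch(username='', password='', server='', port=''):
    """Reassemble 'username:password@host:port' from its components."""
    out = username
    if password:
        out += ':' + password
    if out:
        out += '@'
    out += server
    if port:
        out += ':' + str(port)
    return out
```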

omfit_classes.utils_base.test_connection(username, server, port, timeout=1.0, ntries=1, ssh_path=None)[source]

Function to test if connection is available on a given host and port number

Parameters
  • server – server to connect to

  • port – TCP port to connect to

  • timeout – wait for timeout seconds for connection to become available

  • ntries – number of retries

  • ntries==1 means: check if connection is up

  • ntries>1 means: wait for connection to come up

Returns

boolean indicating if connection is available
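The core of such a check can be sketched with the standard socket module (an illustrative simplification, not the OMFIT implementation, which also handles ssh path options):

```python
import socket
import time

def connection_up(server, port, timeout=1.0, ntries=1):
    """Return True if a TCP connection to server:port can be opened."""
    for attempt in range(ntries):
        try:
            with socket.create_connection((server, int(port)), timeout=timeout):
                return True
        except OSError:
            # connection refused or timed out; wait and retry if allowed
            if attempt < ntries - 1:
                time.sleep(timeout)
    return False
```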

omfit_classes.utils_base.sshOptions(sshName='OMFITssh', ssh_path=None)[source]
omfit_classes.utils_base.ssh_connect_with_password(cmd, credential, test_connected, timeout=10, check_every=0.1)[source]
omfit_classes.utils_base.controlmaster(username, server, port, credential, check=False, ssh_path=None)[source]

Setup controlmaster ssh socket

Parameters
  • username – username of ssh connection

  • server – host of ssh connection

  • port – port of ssh connection

  • credential – credential file to use for connection

  • check – check that control-master exists and is open

Returns

[True/False] if check=True else string to be included to ssh command to use controlmaster

omfit_classes.utils_base.setup_ssh_tunnel(server, tunnel, forceTunnel=False, forceRemote=False, allowEmptyServerUsername=False, ssh_path=None)[source]

This function sets up the remote ssh tunnel (if necessary) to connect to the server and returns the username,server,port triplet onto which to connect.

Parameters
  • server – string with remote server

  • tunnel – string with via tunnel (multi-hop tunneling with comma separated list of hops)

  • forceTunnel – force tunneling even if server is directly reachable (this is useful when servers check provenance of an IP)

  • forceRemote – force remote connection even if server is localhost

  • allowEmptyServerUsername – allow empty server username

Returns

username, server, port

omfit_classes.utils_base.setup_socks(tunnel, ssh_path=None)[source]

Specifies a local “dynamic” application-level port forwarding. Whenever a connection is made to a defined port on the local side, the connection is forwarded over the secure channel, and the application protocol is then used to determine where to connect to from the remote machine. The SOCKS4 and SOCKS5 protocols are supported, and ssh will act as a SOCKS server.

Parameters

tunnel – tunnel

Returns

local sock connection (username, localhost, port)

omfit_classes.utils_base.internet_on(website='http://www.bing.com', timeout=5)[source]

Check internet connection by returning the ability to connect to a given website

Parameters
  • website – website to test (by default ‘http://www.bing.com’ since google is not accessible from China)

  • timeout – timeout in seconds

Returns

ability to connect or not

omfit_classes.utils_base.get_ip()[source]

https://stackoverflow.com/a/28950776/6605826

Returns

current system IP address

class omfit_classes.utils_base.AskPass(credential, force_ask=False, OTP=False, **kw)[source]

Bases: object

Class that asks for password and one-time-password secret if the credential file does not exist

Parameters
  • credential – credential filename to look for under the OMFITsettingsDir+os.sep+’credentials’ directory

  • force_ask – force asking for the password (if file exists pwd is shown for amendment)

  • OTP – ask for one-time-password secret; if ‘raw’ do not pass through pyotp.now()

  • **kw – extra arguments used in self.ask()

explain = 'NOTE: Your saved credentials will be encrypted\n with your ~/.ssh/id_rsa private key\nand stored under /home/fusionbot/.OMFIT/credentials'
ask(pwd, otp, OTP, store, **kw)[source]

Ask user for the password and one-time-password secret

Parameters
  • pwd – password

  • otp – one time password

  • OTP – ask for one-time-password secret; if ‘raw’ do not pass through pyotp.now()

  • store – save pwd + otp to credential file

destroy()[source]
encrypt(pwd, otp)[source]

save user password and one-time-password secret to credential file

decrypt()[source]
Returns

password with the six-digit one-time-password appended

omfit_classes.utils_base.encrypt(in_string, keys=None)[source]

Encrypts a string with user RSA private key

Parameters
  • in_string – input string to be encrypted with RSA private key

  • keys – private keys to encrypt with. if None, default to RSA key.

Returns

encrypted string

omfit_classes.utils_base.decrypt(in_string)[source]

Decrypts a string with user RSA private key

Parameters

in_string – input string to be decrypted with RSA private key

Returns

decrypted string

omfit_classes.utils_base.credential_filename(server)[source]

generates credential filename given username@server:port

Parameters

server – username@server:port

Returns

credential filename
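The essential step is sanitizing the server string into something filesystem-safe. A minimal sketch (the actual OMFIT naming scheme may differ; the function name and '.cred' suffix are illustrative):

```python
import re

def credential_filename_sketch(server):
    """Turn 'username@server:port' into a filesystem-safe credential filename."""
    # replace any character that is not alphanumeric, '.', '-' or '_' with '_'
    return re.sub(r'[^\w.\-]', '_', server) + '.cred'
```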

omfit_classes.utils_base.encrypt_credential(credential, password, otp, keys=None)[source]

generate encrypted credential file

Parameters
  • credential – credential string

  • password – user password

  • otp – user one-time-password secret

  • keys – private keys to encrypt with. if None, default to RSA key.

omfit_classes.utils_base.decrypt_credential(credential)[source]

read and decrypt credential file

Parameters

credential – credential string

Returns

password and one-time-password secret

omfit_classes.utils_base.reset_credential(credential)[source]

delete credential file

Parameters

credential – credential string

omfit_classes.utils_base.password_encrypt(data, password, encoding='utf-8', bufferSize=65536)[source]

Encrypt using AES cryptography

Parameters
  • data – data in clear to be encrypted

  • password – password

  • encoding – encoding to be used to translate from string to bytes

  • bufferSize – buffer size

Returns

encrypted data (bytes)

omfit_classes.utils_base.password_decrypt(data, password, encoding='utf-8', bufferSize=65536)[source]

Decrypt using AES cryptography

Parameters
  • data – encrypted data (bytes) to be decrypted

  • password – password

  • encoding – encoding to be used to translate from bytes to string

  • bufferSize – buffer size

Returns

data in clear

omfit_classes.utils_base.PDF_add_file(pdf, file, name=None, delete_file=False)[source]

Embed file in PDF

Parameters
  • pdf – PDF filename

  • file – filename or file extension

  • name – name of the attachment in the PDF. Uses the filename of file if None.

  • delete_file – remove file after embedding

Returns

full path to PDF file

omfit_classes.utils_base.PDF_get_file(pdf, file, name='.*')[source]

Extract file from PDF

Parameters
  • pdf – PDF filename

  • file – filename or file extension

  • name – regular expression with name(s) of attachment in PDF to get

Returns

full path to file

omfit_classes.utils_base.PDF_set_DMP(pdf, dmp='.h5', delete_dmp=False)[source]

Embed DMP file in PDF

Parameters
  • pdf – PDF filename

  • dmp – DMP filename or extension

  • delete_dmp – remove DMP file after embedding

Returns

full path to PDF file

omfit_classes.utils_base.PDF_get_DMP(pdf, dmp='.h5')[source]

Extract DMP file from PDF

Parameters
  • pdf – PDF filename

  • dmp – filename or file extension

Returns

full path to DMP file

omfit_classes.utils_base.python_imports(namespace, submodules=True, onlyWithVersion=False)[source]

function that lists the Python modules that have been imported in a namespace

Parameters
  • namespace – usually this function should be called with namespace=globals()

  • submodules – list only top level modules or also the sub-modules

  • onlyWithVersion – list only (sub-)modules with __version__ attribute

omfit_classes.utils_base.function_arguments(f, discard=None, asString=False)[source]

Returns the arguments that a function takes

Parameters
  • f – function to inspect

  • discard – list of function arguments to discard

  • asString – concatenate arguments to a string

Returns

tuple of four elements

  • list of compulsory function arguments

  • dictionary of function arguments that have defaults

  • True/False if the function allows variable arguments

  • True/False if the function allows keywords
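This kind of introspection can be sketched with the standard inspect module; the stand-in below (illustrative, not the OMFIT implementation) returns the same four-element tuple:

```python
import inspect

def function_arguments_sketch(f, discard=()):
    """Return (required args, {arg: default}, accepts *args, accepts **kwargs)."""
    spec = inspect.getfullargspec(f)
    ndef = len(spec.defaults or ())
    split = len(spec.args) - ndef  # args before this index have no default
    required = [a for a in spec.args[:split] if a not in discard]
    defaults = {a: d for a, d in zip(spec.args[split:], spec.defaults or ())
                if a not in discard}
    return required, defaults, spec.varargs is not None, spec.varkw is not None
```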

omfit_classes.utils_base.args_as_kw(f, args, kw)[source]

Move positional arguments to kw arguments

Parameters
  • f – function

  • args – positional arguments

  • kw – keywords arguments

Returns

tuple with positional arguments moved to keyword arguments

omfit_classes.utils_base.only_valid_kw(f, kw0=None, **kw1)[source]

Function used to return only entries of a dictionary that would be accepted by a function and avoid TypeError: … got an unexpected keyword argument …

Parameters
  • f – function

  • kw0 – dictionary with potential function arguments

  • **kw1 – keyword dictionary with potential function arguments

>>> f(**only_valid_kw(f, kw))
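The filtering idea can be sketched with inspect.signature (an illustrative simplification, not the OMFIT implementation, which also merges the kw0 and **kw1 inputs):

```python
import inspect

def only_valid_kw_sketch(f, kw):
    """Keep only the entries of kw that f accepts as keyword arguments."""
    sig = inspect.signature(f)
    # if f accepts **kwargs, every keyword is valid
    if any(p.kind is inspect.Parameter.VAR_KEYWORD for p in sig.parameters.values()):
        return dict(kw)
    return {k: v for k, v in kw.items() if k in sig.parameters}
```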

omfit_classes.utils_base.functions_classes_in_script(filename)[source]

Parses a Python script and returns list of functions and classes that are declared there (does not execute the script)

Parameters

filename – filename of the Python script to parse

Returns

dictionary with lists of ‘functions’ and ‘classes’

omfit_classes.utils_base.dump_function_usage(post_operator=None, pre_operator=None)[source]

Decorator function used to collect arguments passed to a function

>>> def printer(d):
...     print(d)

>>> @dump_function_usage(printer)
... def hello(a=1):
...     print('hello')

Parameters
  • post_operator – function called after the decorated function (a dictionary with the function name, arguments, and keyword arguments gets passed)

  • pre_operator – function called before the decorated function (a dictionary with the function name, arguments, and keyword arguments gets passed)

omfit_classes.utils_base.function_to_tree(funct, self_ref)[source]

Converts a function to an OMFITpythonTask instance that can be saved in the tree

Parameters
  • funct – function The function you want to export

  • self_ref – object Reference to the object that would be called self within the script. Its location in the tree will be looked up and used to replace ‘self’ in the code. This is used to add a line defining the variable self within the new OMFITpythonTask’s source. If the function doesn’t use self, then it just has to be something that won’t throw an exception, since it won’t be used (e.g. self_ref=OMFIT should work if you’re not using self)

Returns

An OMFITpythonTask instance

omfit_classes.utils_base.tolist(data, empty_lists=None)[source]

makes sure that the returned item is in the format of a list

Parameters
  • data – input data

  • empty_lists – list of values that if found will be filtered out from the returned list

Returns

list format of the input data

omfit_classes.utils_base.common_in_list(input_list)[source]

Finds which list element is most common (most useful for a list of strings or mixed strings & numbers)

Parameters

input_list – list with hashable elements (no nested lists)

Returns

The list element with the most occurrences. In a tie, one of the winning elements is picked arbitrarily.
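This behavior is what collections.Counter provides directly; a minimal stand-in sketch (illustrative name, not the OMFIT implementation):

```python
from collections import Counter

def common_in_list_sketch(input_list):
    """Return the most common element; ties are broken arbitrarily."""
    return Counter(input_list).most_common(1)[0][0]
```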

omfit_classes.utils_base.keyword_arguments(dictionary)[source]

Returns the string that can be used to generate a dictionary from keyword arguments

eg. keyword_arguments({'a': 1, 'b': 'hello'}) --> a=1, b='hello'

Parameters

dictionary – input dictionary

Returns

keyword arguments string

omfit_classes.utils_base.select_dicts_dict(dictionary, **selection)[source]

Select keys from a dictionary of dictionaries. This is useful to select data from a dictionary that uses a hash as the key for its children dictionaries, where the hash is based on the content of the children.

eg:

>>> parent = {}
>>> parent['child1'] = {'a': 1, 'b': 1}
>>> parent['child2'] = {'a': 1, 'b': 2}
>>> select_dicts_dict(parent, b=1)  # returns: ['child1']
>>> select_dicts_dict(parent, a=1)  # returns: ['child1', 'child2']

Parameters
  • dictionary – parent dictionary

  • **selection – keywords to select on

Returns

list of children whose content matches the selection
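The selection logic can be sketched in a few lines (illustrative stand-in, not the OMFIT implementation), matching the example in the docstring above:

```python
def select_dicts_dict_sketch(dictionary, **selection):
    """Return keys of the children dicts whose items match all of `selection`."""
    return [k for k, child in dictionary.items()
            if all(child.get(s) == v for s, v in selection.items())]
```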

omfit_classes.utils_base.bestChoice(options, query, returnScores=False)[source]

This function returns the best heuristic choice from a list of options

Parameters
  • options – dictionary or list of strings

  • query – string to look for

  • returnScores – whether to return the similarity scores for each of the options

Returns

the tuple with best choice from the options provided and its matching score, or match scores if returnScores=True

omfit_classes.utils_base.flip_values_and_keys(dictionary, modify_original=False, add_key_to_value_first=False)[source]

Flips values and keys of a dictionary. People sometimes search the help for swap_keys, switch_keys, or flip_keys to find this function.

Parameters
  • dictionary – dict input dictionary to be processed

  • modify_original – bool whether the original dictionary should be modified

  • add_key_to_value_first – bool Append the original key to the value (which will become the new key). The new dictionary will look like: {‘value (key)’: key}, where key and value were the original key and value. This will force the new key to be a string.

Returns

dict flipped dictionary
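The core transformation can be sketched as follows (illustrative stand-in, not the OMFIT implementation, which also supports in-place modification via modify_original):

```python
def flip_values_and_keys_sketch(dictionary, add_key_to_value_first=False):
    """Return a new dict with values as keys and keys as values."""
    if add_key_to_value_first:
        # new key is 'value (key)', forcing the new key to be a string
        return {'%s (%s)' % (v, k): k for k, v in dictionary.items()}
    return {v: k for k, v in dictionary.items()}
```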

omfit_classes.utils_base.dir2dict(startpath, dir_dict=<class 'collections.OrderedDict'>)[source]

python dictionary hierarchy based on filesystem hierarchy

Parameters

startpath – filesystem path

Returns

python dictionary

omfit_classes.utils_base.parse_git_message(message, commit=None, tag_commits=[])[source]
class omfit_classes.utils_base.OMFITgit(git_dir, n=25)[source]

Bases: object

An OMFIT interface to a git repo, which does NOT leave dangling open files

Parameters
  • git_dir – the root of the git repo

  • n – number of visible commits

install_OMFIT_hooks(quiet=True)[source]
is_OMFIT_source()[source]
tag_hashes()[source]

Get the full hashes for all of the tags

Returns

A list of hashes, corresponding to self.tag.splitlines()

get_hash(ref)[source]

Get the full hash for the given reference (tag, branch, or commit)

Parameters

ref – A commitish reference (tag, commit hash, or branch) or iterable of references

get_commit_message(commit)[source]

Given the commit hash, return the commit message

Parameters

commit – (str) The hash of the commit

get_commit_date(commit)[source]

Given the commit hash, return the commit date (in form similar to time.time)

Parameters

commit – (str) The hash of the commit

Returns

An int

get_visible_commits(order='author-date-order')[source]

Get the commits that don’t have Hidden, Hide, or Minor comment tags

Returns

A tuple of the commits, their messages, their authors, their dates, and the tag_hashes

get_tag_version(tag_family)[source]

Get a version similar to the results of self.describe, but restricted to tags containing a specific string. This is for finding the tagged version of a module: one might use repo.get_tag_version('cake_') to get the version of the CAKE module (like ‘cake_00.01.684e4d226a’ or ‘cake_00.01’)

Parameters

tag_family – A substring defining a family of tags. It is assumed that splitting this substring out of the git tags will leave behind version numbers for that family.

Returns

A string with the most recent tag in the family, followed by a commit short hash if there have been commits since the tag was defined.

active_branch()[source]
Get the active branch, the (shortened) hash of HEAD, and the standard string containing those

where the standard string is “Commit <shortened hash> on branch <active_branch>”

Returns

active_branch, active_branch_hash, standard_string

get_module_params(path='modules', key='ID', quiet=True, modules=None)[source]

Return a dictionary whose keys are the modules available in this repo

Parameters
  • path – The path (relative to the root of the repo) where the modules are stored

  • key – attribute used for the returned dictionary (eg. ID (default) or path)

  • quiet – whether to print the modules loaded (if None uses progress bar)

  • modules – list of modules to track (full path, minus OMFITsave.txt)

Returns

A dictionary whose keys are the modules available in this repo, the values are dictionaries whose keys are author, date, commit, path, version.

A return example:

{
    'EFITtime':
    {
        'edited_by': 'Sterling Smith',

        'commit': '64a213fb03e14a154567f8eb7b260f10acbe48f3',

        'date': '1456297324', #time.time format

        'path': '/Users/meneghini/Coding/atom/OMFIT-source/modules/EFITtime/OMFITsave.txt',

        'version': u'Module to execute time-dependent EFIT equilibrium reconstructions'
    }
}
get_remotes()[source]

returns dictionary with remotes as keys and their properties

get_branches(remote='')[source]

returns dictionary with branches as keys and their properties

clone()[source]

Clone the repository into the OMFIT working environment OMFITtmpDir+os.sep+'repos' and maintain remotes information. Note: original_git_repository is the remote that points to the original repository that was cloned

Returns

OMFITgit object pointing to cloned repository

switch_branch(branch, remote='')[source]

Switch to branch

Parameters
  • branch – branch

  • remote – optional remote repository

branch_containing_commit(commit)[source]

Returns a list with the names of the branches that contain the specified commit

Parameters

commit – commit to search for

Returns

list of strings (remote is separated by /)

switch_branch_GUI(branch='', remote='', parent=None, title=None, only_existing_branches=False)[source]
remote_branches_details(remote='origin', skip=['unstable', 'master', 'daily_.*', 'gh-pages'])[source]

returns a list of strings with the details of the branches on a given remote. This function is useful to identify stale branches.

Parameters
  • remote – remote repository

  • skip – branches to skip

Returns

list of strings

omfit_classes.utils_base.omfit_hash(string, length=-1)[source]

Hash a string using SHA1 and truncate the hash at the given length. Use this function instead of Python hash(string), since with Python 3 the seed used for hashing changes between Python sessions

Parameters
  • string – input string to be hashed

  • length – length of the hash (max 40)

Returns

SHA1 hash of the string in hexadecimal representation

omfit_classes.utils_base.omfit_numeric_hash(string, length=-1)[source]

Hash a string using SHA1 and truncate the resulting integer at the given length. Use this function instead of Python hash(string), since with Python 3 the seed used for hashing changes between Python sessions

Parameters
  • string – input string to be hashed

  • length – length of the hash (max 47)

Returns

SHA1 hash of the string in integer representation
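Both functions can be sketched with hashlib (illustrative stand-ins; truncation semantics are an assumption based on the descriptions above):

```python
import hashlib

def omfit_hash_sketch(string, length=-1):
    """Session-stable SHA1 hash of a string, truncated to `length` hex characters."""
    h = hashlib.sha1(string.encode('utf-8')).hexdigest()
    return h[:length] if length > 0 else h

def omfit_numeric_hash_sketch(string, length=-1):
    """The same hash as an integer, truncated to `length` decimal digits."""
    digits = str(int(hashlib.sha1(string.encode('utf-8')).hexdigest(), 16))
    return int(digits[:length]) if length > 0 else int(digits)
```

Unlike the builtin hash(), these values are stable across Python sessions, which makes them safe to use in cache keys or filenames.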

omfit_classes.utils_base.find_library(libname, default=None)[source]

This function returns the path of the matching library name

Parameters
  • libname – name of the library to look for (without lib and extension)

  • default – what to return if library is not found

Returns

omfit_classes.utils_base.find_file(reg_exp_filename, path)[source]

find all filenames matching regular expression under a path

Parameters
  • reg_exp_filename – regular expression for the file to match

  • path – folder where to look

Returns

list of filenames matching regular expression with full path
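The search can be sketched with os.walk and the re module (illustrative stand-in, not the OMFIT implementation):

```python
import os
import re

def find_file_sketch(reg_exp_filename, path):
    """Return full paths of files under `path` whose name matches the regex."""
    pattern = re.compile(reg_exp_filename)
    matches = []
    for root, dirs, files in os.walk(path):
        matches.extend(os.path.join(root, f) for f in files if pattern.match(f))
    return matches
```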

omfit_classes.utils_base.sleep(sleeptime)[source]

Non blocking sleep

Parameters

sleeptime – time to sleep in seconds

Returns

None

omfit_classes.utils_base.now(format_out='%d %b %Y  %H:%M', timezone=None)[source]
Parameters
  • format_out – format string; see https://docs.python.org/3/library/datetime.html#strftime-and-strptime-behavior

  • timezone – [string] timezone name; see /usr/share/zoneinfo for the available options, which include fixed zones (UTC, UCT, GMT, GMT0, GMT+0, GMT-0, Zulu, Universal, Greenwich, WET, CET, EET, MET, HST, MST, EST, CST6CDT, MST7MDT, PST8PDT, EST5EDT, ...) and region or country zones (Africa/?, America/?, Antarctica/?, Arctic/?, Asia/?, Atlantic/?, Australia/?, Brazil/?, Canada/?, Chile/?, Etc/?, Europe/?, Indian/?, Mexico/?, Pacific/?, US/?, as well as named zones such as Japan, Israel, Iceland, Iran, Cuba, Egypt, Libya, Jamaica, Turkey, Poland, Portugal, Eire, GB, GB-Eire, Hongkong, Singapore, PRC, ROC, ROK, NZ, NZ-CHAT, Kwajalein, Navajo, W-SU, Factory)

Returns

formatted datetime string if format_out is None, return datetime object

omfit_classes.utils_base.convertDateFormat(date, format_in='%d/%m/%Y %H:%M', format_out='%d %b %Y  %H:%M')[source]
Parameters
  • date – string date or float timestamp

  • format_in – date format of the input (ignored if date is float timestamp)

  • format_out – date format of the wanted output

Returns

string date in new format

omfit_classes.utils_base.update_dir(root_src_dir, root_dst_dir)[source]

Go through the source directory, create any directories that do not already exist in the destination directory, and move files from the source to the destination directory. Any pre-existing files will be removed first (via os.remove) before being replaced by the corresponding source file. Any files or directories that already exist in the destination but not in the source will remain untouched.

Parameters
  • root_src_dir – Source directory

  • root_dst_dir – Destination directory

omfit_classes.utils_base.permissions(path)[source]
Parameters

path – file path

Returns

file permissions as a (user, group, other) (read, write, execute) string, such as: rwxr-xr-x
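Such a permission string can be built from the mode bits returned by os.stat; a minimal sketch (illustrative, not the OMFIT implementation):

```python
import os

def permissions_sketch(path):
    """Return an 'rwxr-xr-x'-style permission string for a path."""
    mode = os.stat(path).st_mode
    bits = 'rwxrwxrwx'  # user, group, other triplets
    # bit 8 (0o400) down to bit 0 (0o001) map onto the nine characters
    return ''.join(b if mode & (1 << (8 - i)) else '-' for i, b in enumerate(bits))
```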

omfit_classes.utils_base.zipfolder(foldername, filename, compression=0, allowZip64=True)[source]

compress folder to zip archive

Parameters
  • foldername – folder to compress

  • filename – zip filename to use

  • compression – compression algorithm

  • allowZip64 – use 64bit extension to handle files >4GB

omfit_classes.utils_base.omfit_patch(obj, fun)[source]

Patch a standard module/class with a new function/method. Moves original attribute to _original_<name> ONLY ONCE! If done blindly you will go recursive when reloading modules

omfit_classes.utils_base.funny_random_name_generator(use_mood=False, digits=2)[source]

Makes up a random name with no spaces in it. Funnier than timestamps.

Parameters
  • use_mood – bool Use a mood instead of a color

  • digits – int Number of digits in the random number (default: 2)

Returns

string
The default format is [color]_[animal]_[container]_[two digit number], e.g. "blueviolet_kangaroo_prison_26". Colors come from matplotlib's list. With use_mood=True the format is [mood]_[animal]_[container]_[two digit number], e.g. "menacing_guppy_pen_85".

omfit_classes.utils_base.developer_mode(module_link=None, **kw)[source]

Calls the OMFITmodule.convert_to_developer_mode() method for every top-level module in OMFIT

Intended for convenient access in a command box script that can be called as soon as the project is reloaded.

Accepts keywords related to OMFITmodule.convert_to_developer_mode() and passes them to that function.

Parameters

module_link – OMFITmodule or OMFIT instance. Reference to the module to be converted, or to the top-level OMFIT instance. If None, defaults to OMFIT (affecting all top-level modules).

omfit_classes.utils_base.sanitize_version_number(version)[source]

Removes common non-numerical characters from version numbers obtained from git tags, such as ‘_rc’, etc.

omfit_classes.utils_base.compare_version(version1, version2)[source]

Compares two version numbers and determines which one, if any, is greater.

This function can handle wildcards (eg. 1.1.*) Most non-numeric characters are removed, but some are given special treatment. a, b, c represent alpha, beta, and candidate versions and are replaced by numbers -3, -2, -1.

So 4.0.1-a turns into 4.0.1.-3, 4.0.1-b turns into 4.0.1.-2, and then -3 < -2 so the beta will be recognized as newer than the alpha version.

rc# is recognized as a release candidate that is older than the version without the rc

So 4.0.1_rc1 turns into 4.0.1.-1.1 which is older than 4.0.1 because 4.0.1 implies 4.0.1.0.0. Also 4.0.1_rc2 is newer than 4.0.1_rc1.

Parameters
  • version1 – str First version to compare

  • version2 – str Second version to compare

Returns

int
1 if version1 > version2
-1 if version1 < version2
0 if version1 == version2
0 if wildcards allow the version ranges to overlap, e.g. 4.* vs. 4.1.5 returns 0 (equal)

omfit_classes.utils_base.find_latest_version(versions)[source]

Given a list of strings with version numbers like 1.2.12, 1.2, 1.20.5, 1.2.3.4.5, etc., find the maximum version number. Test with: print(repo.get_tag_version('v'))

Parameters

versions – List of strings like [‘1.1’, ‘1.2’, ‘1.12’, ‘1.1.13’]

Returns

A string from the list of versions corresponding to the maximum version number.

omfit_classes.utils_base.check_installed_packages(requirements='/home/fusionbot/jenkins/newweb/omfit/../install/requirements.txt')[source]

Check version of required OMFIT packages

Parameters

requirements – path to the requirements.txt file

Returns

summary dictionary

omfit_classes.utils_base.version_conditions_checker(version, conditions)[source]

Check that a given version passes all version conditions

Parameters
  • version – version to check

  • conditions – conditions to be met (multiple conditions are separated by comma)

Returns

True if all conditions are satisfied, otherwise False

omfit_classes.utils_base.summarize_installed_packages(required=True, optional=True, verbose=True)[source]
Parameters
  • required – report on required packages

  • optional – report on optional packages

  • verbose – print to console

Returns

status code and text

omfit_classes.utils_base.installed_packages_summary_as_text(installed_packages)[source]

Return a formatted string of the dictionary returned by check_installed_packages()

Parameters

installed_packages – dictionary generated by check_installed_packages()

Returns

text representation of the check_installed_packages() dictionary

omfit_classes.utils_base.purge_omfit_temporary_files()[source]

Utility function to purge OMFIT temporary files

utils

exception utils.EmailException[source]

Bases: Exception

An Exception class to be raised by a send_email function

utils.send_email(to='', cc='', fromm='', subject='', message='', attachments=None, server=None, port=None, username=None, password=None, isTls=False, quiet=True)[source]

Send an email, using localhost as the smtp server.

Parameters
  • to – must be one of: 1) a single address as a string, 2) a string of comma-separated addresses, 3) a list of string addresses

  • fromm – String

  • subject – String

  • message – String

  • attachments – List of path to files

  • server – SMTP server

  • port – SMTP port

  • username – SMTP username

  • password – SMTP password

  • isTls – Puts the connection to the SMTP server into TLS mode

Returns

string that user can decide to print to screen

class utils.AskPassGUI(credential, force_ask=False, parent=None, title='Enter password', OTP=False, **kw)[source]

Bases: utils_tk.Toplevel, omfit_classes.utils_base.AskPass

Class that builds a Tk GUI to ask for password and one-time-password secret if the credential file does not exist

Parameters
  • credential – credential filename to look for under the OMFITsettingsDir+os.sep+’credentials’ directory

  • force_ask – force asking for the password (if file exists pwd is shown for amendment)

  • OTP – ask for one-time-password secret; if ‘raw’ do not pass through pyotp.now()

  • parent – parent GUI

  • title – title of the GUI

  • **kw – extra arguments passed to tk.Toplevel

ask(pwd, otp, OTP, store, **kw)[source]

Ask user for the password and one-time-password secret

Parameters
  • pwd – password

  • otp – one time password

  • OTP – ask for one-time-password secret; if ‘raw’ do not pass through pyotp.now()

  • store – save pwd + otp to credential file

exception utils.LicenseException[source]

Bases: Exception

An Exception class to be raised by a License object

class utils.License(codename, fn, email_dict={}, web_address=None, rootGUI=None)[source]

Bases: object

The purpose of this class is to have a single point of interaction with a given license. After the License is initialized, then it should be checked when the given code (or member of a suite) is to be run.

All licenses are stored in $HOME/.LICENCES/<codename>

Parameters
  • codename – The name of the code (or suite) to which the license applies

  • fn – The location (filename) of the software license

  • email_dict

    (optional) At least two members

    • email_address - The address(es) to which the email should be sent,

      if multiple, as a list

    • email_list - A message describing any lists to which the user should

      be added for email notifications or discussion

  • rootGUI – tkInter parent widget

  • web_address – (optional) A URL for looking up the code license

check()[source]

Check if the license was accepted and is up-to-date. If not up-to-date, ask the user to accept the new license, but do not send emails, open the web browser, and such.

Returns

whether the license was accepted and is up-to-date

present_license()[source]

Show the license as a read-only scrolled text

Returns

whether the user accepts the license

utils.getIP_lat_lon(ip, verbose=True, access_key='9bf65b672b3903d324044f1efc4abbd1')[source]

Connect to the ipstack web service to get geolocation info from a list of IP addresses.

Parameters
  • ip – single IP string, list of IPs, or dictionary with IPs as keys

  • verbose – print info to screen as it gets fetched

  • access_key – https://ipstack.com access key

Returns

dictionary with IP strings as keys and location information as values
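As a rough sketch of the lookup this function performs, the hypothetical helper below builds the query URL for a single IP; the endpoint and parameter names are assumptions based on the public ipstack service, not taken from the OMFIT source.

```python
def ipstack_url_sketch(ip, access_key='YOUR_ACCESS_KEY'):
    """Hypothetical sketch: build the ipstack lookup URL for a single IP.
    Endpoint and parameter names are assumptions based on the public
    ipstack API, not taken from the OMFIT source."""
    return 'http://api.ipstack.com/%s?access_key=%s' % (ip, access_key)
```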

utils.generate_map(lats=[], lons=[], wesn=None, s=100, **kw)[source]

Using the basemap matplotlib toolkit, this function generates a map and puts a marker at the location of every latitude and longitude found in the lists

Parameters
  • lats – list of latitude floats

  • lons – list of longitude floats

  • wesn – list of 4 floats to clip map west, east, south, north

  • s – size of the markers

  • **kw – other arguments passed to the scatter plot

Returns

mpl_toolkits.basemap.Basemap object

utils.clean_docstring_for_help(string_or_function_in, remove_params=True, remove_deep_indentation=True)[source]

Processes a function docstring so it can be used as the help tooltip for a GUI element without looking awkward. Protip: you can test this function on its own docstring.

Example usage:

def my_function():
    '''
    This docstring would look weird as a help tooltip if used directly (as in help=my_function.__doc__).
    The line breaks and indentation after line breaks will not be appropriate.
    Also, a long line like this will be broken automatically when put into a help tooltip, but it won't be indented.
    However, putting it through clean_docstring_for_help() will solve all these problems and the re-formatted text
    will look better.
    '''
    print('ran my_function')
    return 0
OMFITx.Button("Run my_function", my_function, help=clean_docstring_for_help(my_function))
Parameters
  • string_or_function_in – The string to process, expected to be either the string stored in function.__doc__ or just the function itself (from which .__doc__ will be read). Also works with OMFITpythonTask, OMFITpythonGUI, and OMFITpythonPlot instances as input.

  • remove_params – T/F: Only keep the docstring up to the first instance of ” :param ” or ” :return ” as extended information about parameters might not fit well in a help tooltip.

  • remove_deep_indentation – T/F: True: Remove all of the spaces between a line break and the next non-space character and replace with a single space. False: Remove exactly n spaces after a line break, where n is the indentation of the first line (standard dedent behavior).

Returns

Cleaned up string without spacing or line breaks that might look awkward in a GUI tooltip.

utils.numeric_type_subclasser(binary_function='__binary_op__', unary_function='__unary_op__')[source]

This is a utility function that lists the methods that need to be defined for a class to behave like a numeric type in Python 3. This used to be handled by the __coerce__ method in Python 2, which is no longer available.

Parameters
  • binary_function – string to be used for binary operations

  • unary_function – string to be used for unary operations
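As an illustration, the hypothetical snippet below enumerates a representative subset of the special methods involved and maps them to the handler names passed in; the exact method list used by numeric_type_subclasser may be longer.

```python
# Hypothetical illustration: a representative subset of the special methods
# a numeric-like class must define in Python 3, mapped to the handler names.
BINARY_OPS = [
    '__add__', '__sub__', '__mul__', '__truediv__', '__floordiv__',
    '__mod__', '__pow__', '__radd__', '__rsub__', '__rmul__',
    '__rtruediv__', '__rfloordiv__', '__rmod__', '__rpow__',
]
UNARY_OPS = ['__neg__', '__pos__', '__abs__']

def numeric_methods(binary_function='__binary_op__', unary_function='__unary_op__'):
    """Map each dunder method to the name of the handler implementing it."""
    mapping = {op: binary_function for op in BINARY_OPS}
    mapping.update({op: unary_function for op in UNARY_OPS})
    return mapping
```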

utils_widgets

utils_widgets.treeText(inv, strApex=False, n=-1, escape=True)[source]

Returns the string that is shown in the OMFITtree for the input object inv

Parameters
  • inv – input object

  • strApex – should the string be returned with apexes

  • n – maximum string length

  • escape – escape backslash characters

Returns

string representation of object inv

utils_widgets.ctrlCmd()[source]
utils_widgets.short_str(inv, n=-1, escape=True, snip='[...]')[source]
Parameters
  • inv – input string

  • n – maximum length of the string (negative is unlimited)

  • escape – escape backslash characters

  • snip – string to replace the central part of the input string

Returns

new string
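The behavior described above can be sketched as follows; short_str_sketch is a hypothetical stand-in, not the OMFIT implementation, and the exact head/tail split is an assumption.

```python
def short_str_sketch(inv, n=-1, snip='[...]'):
    """Hypothetical sketch of short_str: replace the middle of a long
    string with `snip` so the result is at most n characters (negative n
    means unlimited). The exact head/tail split is an assumption."""
    if n < 0 or len(inv) <= n:
        return inv
    keep = n - len(snip)
    if keep <= 0:
        return snip[:n]
    head = keep - keep // 2
    tail = keep // 2
    return inv[:head] + snip + (inv[len(inv) - tail:] if tail else '')
```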

utils_widgets.tkStringEncode(inv)[source]

Tk requires ‘ ‘, ‘\’, ‘{‘ and ‘}’ characters to be escaped. Use of this function depends on the system, and the behavior can be set on a system-by-system basis using the OMFIT_ESCAPE_TK_SPACES environment variable

Parameters

inv – input string

Returns

escaped string
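A minimal sketch of this kind of escaping, assuming the documented special characters and a hypothetical reading of the OMFIT_ESCAPE_TK_SPACES variable; the real per-system logic may differ.

```python
import os

def tk_escape_sketch(inv):
    """Hypothetical sketch of Tk-style escaping: prefix ' ', '\\', '{' and
    '}' with a backslash. The real tkStringEncode decides per system; here
    this is approximated by a hypothetical reading of OMFIT_ESCAPE_TK_SPACES."""
    if os.environ.get('OMFIT_ESCAPE_TK_SPACES', '1') == '0':
        return inv
    return ''.join('\\' + c if c in ' \\{}' else c for c in inv)
```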

utils_widgets.OMFITfont(weight='', size=0, family='', slant='')[source]

The first time that OMFITfont is called, the system defaults are gathered

Parameters
  • weight – ‘normal’ or ‘bold’

  • size – positive or negative number relative to the tkinter default

  • family – font family

  • slant – font slant

Returns

tk font object to be used in tkinter calls

utils_widgets.tk_center(win, parent=None, width=None, height=None, xoff=None, yoff=None, allow_negative=False)[source]

Function used to center a tkInter GUI

Note: by using this function tk does not manage the GUI geometry which is beneficial when switching between desktop on some window managers, which would otherwise re-center all windows to the desktop.

Parameters
  • win – window to be centered

  • parent – window with respect to be centered (center with respect to screen if None)

  • xoff – x offset in pixels

  • yoff – y offset in pixels

  • allow_negative – whether window can be off the left/upper part of the screen

  • width – set this window width

  • height – set this window height

class utils_widgets.HelpTip(widget=None, move=False)[source]

Bases: object

showtip(text, location=None, mode=None, move=True, strip=False)[source]

Display text in helptip window

hidetip(event=None)[source]
movetip(event=None)[source]
changeMode(event=None)[source]
class utils_widgets.ToolTip(widget)[source]

Bases: object

Tooltip recipe from http://www.voidspace.org.uk/python/weblog/arch_d7_2006_07_01.shtml#e387

static createToolTip(widget, text)[source]
showtip(text)[source]

Display text in tooltip window

hidetip()[source]
utils_widgets.dialog(message='Are You Sure?', answers=['Yes', 'No'], icon='question', title=None, parent=None, options=None, entries=None, **kw)[source]

Display a dialog box and wait for user input

Note:

  • The first answer is highlighted (for users to quickly respond with <Return>)

  • The last answer is returned for when users press <Escape> or close the dialog

Parameters
  • message – the text to be written in the label

  • answers – list of possible answers

  • icon – “question”, “info”, “warning”, “error”

  • title – title of the frame

  • parent – tkinter window to which the dialog will be set as transient

  • options – dictionary of True/False options that are displayed as checkbuttons in the dialog

  • entries – dictionary of string options that are displayed as entries in the dialog

Returns

the answer chosen by the user (a dictionary if the options keyword was passed)

utils_widgets.pickTreeLocation(startLocation=None, title='Pick tree location ...', warnExists=True, parent=None)[source]

Function which opens a blocking GUI to ask user to pick a tree location

Parameters
  • startLocation – starting location

  • title – title to show in the window

  • warnExists – warn user if tree location is already in use

  • parent – tkinter window to which the dialog will be set as transient

Returns

the location picked by the user

utils_widgets.password_gui(title='Password', parent=None, key=None)[source]

Present a password dialog box

Parameters
  • title – The title for the dialog box

  • parent – The GUI parent

  • key – A key for caching the password in this session

class utils_widgets.eventBindings[source]

Bases: object

add(description, widget, event, callback, tag=None)[source]
remove_widget(widget)[source]
set(description, event)[source]
get(description)[source]
print_event_in_bindings(w, event, ind)[source]
printAll()[source]
utils_widgets.screen_geometry()[source]

Function returns the screen geometry

Returns

tuple with 4 integers for width, height, x, y

utils_widgets.wmctrl()[source]

This function is useful when plugging in a new display on OSX and the OMFIT window disappears. To fix the issue, go to the XQuartz application and select the OMFIT window from the menu Window > OMFIT … Then press F8 a few times until the OMFIT GUI appears on one of the screens.

utils_widgets.TKtopGUI(item)[source]

Function to reach the TopLevel tk from a GUI element

Parameters

item – tk GUI element

Returns

TopLevel tk GUI

utils_tk

utils_tk.get_entry_fieldbackground()[source]
class utils_tk.Combobox(*args, **kwargs)[source]

Bases: tkinter.ttk.Combobox

Monkey patch of ttk combobox to dynamically update its dropdown menu. The issue is ttk uses a tk Listbox that doesn’t conform to the ttk theme for its dropdown (not cool ttk). This patch is modified from https://stackoverflow.com/questions/43086378/how-to-modify-ttk-combobox-fonts

Construct a Ttk Combobox widget with the parent master.

STANDARD OPTIONS

class, cursor, style, takefocus

WIDGET-SPECIFIC OPTIONS

exportselection, justify, height, postcommand, state, textvariable, values, width

configure(cnf=None, **kw)[source]

Configure resources of a widget. Overridden!

The values for resources are specified as keyword arguments. To get an overview about the allowed keyword arguments call the method keys.

config(cnf=None, **kw)

Configure resources of a widget. Overridden!

The values for resources are specified as keyword arguments. To get an overview about the allowed keyword arguments call the method keys.

utils_tk.omfit_sel_handle(offset, length, selection=None, type=None)[source]

This function must return the contents of the selection. The function will be called with the arguments OFFSET and LENGTH which allows the chunking of very long selections. The following keyword parameters can be provided: selection - name of the selection (default PRIMARY), type - type of the selection (e.g. STRING, FILE_NAME).

Parameters
  • offset – allows the chunking of very long selections

  • length – allows the chunking of very long selections

  • selection – name of the selection (default set by $OMFIT_CLIPBOARD_SELECTION)

  • type – type of the selection (default set by $OMFIT_CLIPBOARD_TYPE)

Returns

clipboard selection

class utils_tk.Toplevel(*args, **kw)[source]

Bases: tkinter.Toplevel

Patch tk.Toplevel to get windows with ttk themed backgrounds.

Construct a toplevel widget with the parent MASTER.

Valid resource names: background, bd, bg, borderwidth, class, colormap, container, cursor, height, highlightbackground, highlightcolor, highlightthickness, menu, relief, screen, takefocus, use, visual, width.

class utils_tk.Canvas(*args, **kw)[source]

Bases: tkinter.Canvas

Patch tk.Canvas to get windows with ttk themed backgrounds.

Construct a canvas widget with the parent MASTER.

Valid resource names: background, bd, bg, borderwidth, closeenough, confine, cursor, height, highlightbackground, highlightcolor, highlightthickness, insertbackground, insertborderwidth, insertofftime, insertontime, insertwidth, offset, relief, scrollregion, selectbackground, selectborderwidth, selectforeground, state, takefocus, width, xscrollcommand, xscrollincrement, yscrollcommand, yscrollincrement.

class utils_tk.Menu(*args, **kw)[source]

Bases: tkinter.Menu

Patch tk.Menu to get windows with ttk themed backgrounds.

Construct menu widget with the parent MASTER.

Valid resource names: activebackground, activeborderwidth, activeforeground, background, bd, bg, borderwidth, cursor, disabledforeground, fg, font, foreground, postcommand, relief, selectcolor, takefocus, tearoff, tearoffcommand, title, type.

class utils_tk.Listbox(*args, **kw)[source]

Bases: tkinter.Listbox

Patch tk.Listbox to get windows with ttk themed backgrounds.

Construct a listbox widget with the parent MASTER.

Valid resource names: background, bd, bg, borderwidth, cursor, exportselection, fg, font, foreground, height, highlightbackground, highlightcolor, highlightthickness, relief, selectbackground, selectborderwidth, selectforeground, selectmode, setgrid, takefocus, width, xscrollcommand, yscrollcommand, listvariable.

class utils_tk.Entry(*args, **kw)[source]

Bases: tkinter.Entry

Construct an entry widget with the parent MASTER.

Valid resource names: background, bd, bg, borderwidth, cursor, exportselection, fg, font, foreground, highlightbackground, highlightcolor, highlightthickness, insertbackground, insertborderwidth, insertofftime, insertontime, insertwidth, invalidcommand, invcmd, justify, relief, selectbackground, selectborderwidth, selectforeground, show, state, takefocus, textvariable, validate, validatecommand, vcmd, width, xscrollcommand.

select_all(event)[source]

Set selection on the whole text

undo(event=None)[source]
redo(event=None)[source]
add_changes(event=None)[source]
class utils_tk.Text(*args, **kw)[source]

Bases: tkinter.Text

Construct a text widget with the parent MASTER.

STANDARD OPTIONS

background, borderwidth, cursor, exportselection, font, foreground, highlightbackground, highlightcolor, highlightthickness, insertbackground, insertborderwidth, insertofftime, insertontime, insertwidth, padx, pady, relief, selectbackground, selectborderwidth, selectforeground, setgrid, takefocus, xscrollcommand, yscrollcommand,

WIDGET-SPECIFIC OPTIONS

autoseparators, height, maxundo, spacing1, spacing2, spacing3, state, tabs, undo, width, wrap,

select_all(event)[source]
class utils_tk.TtkScale(master=None, **kwargs)[source]

Bases: tkinter.ttk.Frame

Construct a Ttk Frame with parent master.

STANDARD OPTIONS

class, cursor, style, takefocus

WIDGET-SPECIFIC OPTIONS

borderwidth, relief, padding, width, height

convert_to_pixels(value)[source]
display_value(value)[source]
place_ticks()[source]
on_configure(event)[source]

Redisplay the ticks and the label so that they adapt to the new size of the scale.

utils_tk.help(*args, **kw)[source]

Classes

omfit_accome

class omfit_classes.omfit_accome.OMFITaccomeEquilibrium(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.omfit_ascii.OMFITascii

Class used to interface equilibrium files generated by ACCOME

Parameters
  • filename – filename passed to OMFITascii class

  • **kw – keyword dictionary passed to OMFITascii class

load()[source]

omfit_ascii

class omfit_classes.omfit_ascii.OMFITascii(filename, **kw)[source]

Bases: omfit_classes.omfit_path.OMFITpath

OMFIT class used to interface with ASCII files

Parameters
  • filename – filename passed to OMFITobject class

  • fromString – string that is written to file

  • **kw – keyword dictionary passed to OMFITobject class

read()[source]

Read ASCII file and return content

Returns

string with content of file

write(value)[source]

Write string value to ASCII file

Parameters

value – string to be written to file

Returns

string with content of file

append(value)[source]

Append string value to ASCII file

Parameters

value – string to be written to file

Returns

string with content of file

class omfit_classes.omfit_ascii.OMFITexecutionDiagram(*args, **kwargs)[source]

Bases: omfit_classes.omfit_ascii.OMFITascii, omfit_classes.sortedDict.SortedDict

load()[source]
print_sorted(by='time')[source]

omfit_asciitable

class omfit_classes.omfit_asciitable.OMFITasciitable(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.omfit_ascii.OMFITascii

OMFIT class used to interface with ascii tables files

This class makes use of the asciitable Python module http://cxc.harvard.edu/contrib/asciitable/

Files can have a header preceding the table (e.g. a comment) and it will be stored in the [‘header’] field

NOTE: the [‘data’] field is a np.recarray and the data in the tables columns can be retrieved with the names defined in the [‘columns’] field

Parameters
  • filename – filename passed to OMFITascii class

  • skipToLine – (integer) line to skip to before doing the parsing

  • **kw – keyword dictionary passed to OMFITascii class

load()[source]

Method used to load the content of the file specified in the .filename attribute

Returns

None

save()[source]

Method used to save the content of the object to the file specified in the .filename attribute

Returns

None

omfit_aurora

Provides classes and utility functions for easily using Aurora within OMFIT. Documentation: https://aurora-fusion.readthedocs.io/

class omfit_classes.omfit_aurora.OMFITaurora(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.startup_framework.OMFITobject

OMFIT class used to interface with Aurora simulation files.

Parameters
  • filename – filename passed to OMFITobject class

  • **kw – keyword dictionary passed to OMFITobject class

keys()[source]

Returns the sorted list of keys in the dictionary

Parameters
  • filter – regular expression for filtering keys

  • matching – boolean to indicate whether to return the keys that match (or not)

Returns

list of keys
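A sketch of the filter/matching behavior described above, using a plain dict in place of the sorted dictionary; this is a hypothetical helper, not the OMFIT implementation.

```python
import re

def filtered_keys(d, filter=None, matching=True):
    """Hypothetical sketch of the keys() filtering described above: return
    the keys whose names match (matching=True) or do not match
    (matching=False) the regular expression `filter`."""
    if filter is None:
        return list(d)
    rex = re.compile(filter)
    return [k for k in d if bool(rex.search(str(k))) == matching]
```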

items() → a set-like object providing a view on D’s items[source]
load()[source]
save()[source]

The save method is supposed to be overridden by classes which use OMFITobject as a superclass. If left as it is this method can detect if .filename was changed and if so, makes a copy from the original .filename (saved in the .link attribute) to the new .filename

omfit_base

class omfit_classes.omfit_base.OMFITtree(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict

A branch in the tree is represented in the filesystem as a directory. Note that the OMFIT object itself belongs to the OMFITmainTree class, which is a subclass of the OMFITtree class.

Parameters
  • filename – ‘directory/bla/OMFITsave.txt’ or ‘directory/bla.zip’ where the OMFITtree will be saved (if ‘’ it will be saved in the same folder of the parent OMFITtree)

  • only – list of strings used to load only some of the branches from the tree (eg. [“[‘MainSettings’]”,”[‘myModule’][‘SCRIPTS’]”]

  • modifyOriginal – by default OMFIT will save a copy and then overwrite previous save only if successful. If modifyOriginal=True and filename is not .zip, will write data directly at destination, which will be faster but comes with the risk of deleting a good save if the new save fails for some reason

  • readOnly – will place entry in OMFITsave.txt of the parent so that this OMFITtree can be loaded, but will not save the actual content of this subtree. readOnly=True is meant to be used only after this subtree is deployed where its filename says it will be. Using this feature can result in much faster project saves if the content of this tree is large.

  • quiet – Verbosity level

  • developerMode – load OMFITpython objects within the tree as modifyOriginal

  • serverPicker – take server/tunnel info from MainSettings[‘SERVER’]

  • remote – access the filename in the remote directory

  • server – if specified the file will be downsynced from the server

  • tunnel – access the filename via the tunnel

  • **kw – Extra keywords are passed to the SortedDict class

addBranchPath(location, leaf=[], upTo=None)[source]

Creates a path in the tree, without overwriting branches which already exist

Parameters
  • location – string containing the path to be added

  • leaf – Value to set the destination if not present

  • upTo – location is traversed up to upTo

e.g. OMFIT[‘branch’].addBranchPath(“[‘branch2’][‘branch3’]”)
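The path-creation behavior can be sketched with nested dicts; this hypothetical helper parses location strings of the form shown above and never overwrites branches that already exist.

```python
import re

def add_branch_path_sketch(tree, location):
    """Hypothetical sketch of addBranchPath on plain nested dicts: walk a
    location string like "['branch2']['branch3']" and create missing
    levels without overwriting branches that already exist."""
    node = tree
    for key in re.findall(r"\['([^']+)'\]", location):
        node = node.setdefault(key, {})
    return tree
```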

duplicate(filename='', modifyOriginal=False, readOnly=False, quiet=True)[source]

Similar to the duplicate method for OMFITobjects, this method makes a copy by files. This means that the returned subtree objects will point to different files from those of the original object. This is to be contrasted with a deepcopy of an object, which copies the objects in memory but does not duplicate the files themselves.

Parameters
  • filename – if filename=’’ then the duplicated subtree and its files will live in the OMFIT working directory, if filename=’directory/OMFITsave.txt’ then the duplicated subtree and its files will live in directory specified

  • modifyOriginal – only if filename!=’’ by default OMFIT will save a copy and then overwrite previous save only if successful. If modifyOriginal=True and filename is not .zip, will write data directly at destination, which will be faster but comes with the risk of deleting a good save if the new save fails for some reason

  • readOnly – only if filename!=’’ will place entry in OMFITsave.txt of the parent so that this OMFITtree can be loaded, but will not save the actual content of this subtree. readOnly=True is meant to be used only after this subtree is deployed where its filename says it will be. Using this feature can result in much faster project saves if the content of this tree is large.

  • quiet – Verbosity level

Returns

new subtree, with objects pointing to different files from the one of the original object

NOTE: readOnly+modifyOriginal is useful because one can get significant read (modifyOriginal) and write (readOnly) speed-ups, but this feature relies on the users pledging they will not modify the content under this subtree.

duplicateGUI(initial=None, modifyOriginal=False, readOnly=False, quiet=True)[source]
deploy(filename='', zip=False, quiet=False, updateExistingDir=False, serverPicker=None, server='localhost', tunnel='', s3bucket=None, ignoreReturnCode=False)[source]

Writes the content of the branch on the filesystem

Parameters
  • filename – contains all of the information to reconstruct the tree and can be loaded with the load() function

  • zip – whether the deploy should occur as a zip file

  • updateExistingDir – if not zip and not onlyOMFITsave this option does not delete original directory but just updates it

  • serverPicker – take server/tunnel info from MainSettings[‘SERVER’]

  • server – server to which to upload the file

  • tunnel – tunnel to connect to the server

  • s3bucket – name of s3 bucket to upload to

  • ignoreReturnCode – ignore return code of rsync command

deployGUI(filename='', **kw)[source]

Opens GUI for .deploy() method

Parameters

initial – Starting filename used in the GUI. If None, then the last browsed directory is used (OMFITaux[‘lastBrowsedDirectory’])

Returns

if deployment was successful

load(filename=None, only=None, modifyOriginal=False, readOnly=False, quiet=False, developerMode=False, lazyLoad=False)[source]

method for loading OMFITtree from disk

Parameters
  • filename – ‘directory/bla/OMFITsave.txt’ or ‘directory/bla.zip’ where the OMFITtree will be saved (if ‘’ it will be saved in the same folder of the parent OMFITtree)

  • only – list of strings used to load only some of the branches from the tree (eg. [“[‘MainSettings’]”,”[‘myModule’][‘SCRIPTS’]”]

  • modifyOriginal – by default OMFIT will save a copy and then overwrite previous save only if successful. If modifyOriginal=True and filename is not .zip, will write data directly at destination, which will be faster but comes with the risk of deleting a good save if the new save fails for some reason

  • readOnly – will place entry in OMFITsave.txt of the parent so that this OMFITtree can be loaded, but will not save the actual content of this subtree. readOnly=True is meant to be used only after this subtree is deployed where its filename says it will be. Using this feature can result in much faster project saves if the content of this tree is large.

  • quiet – Verbosity level

  • developerMode – load OMFITpython objects within the tree as modifyOriginal

  • lazyLoad – enable/disable lazy load of picklefiles and xarrays

close()[source]

Recursively calls the .close() method on OMFITobjects and OMFITtree objects

populate_from_directory(dirname, extensions={}, **kw)[source]

Load files from a directory maintaining the directory structure

Parameters
  • dirname – directory path

  • extensions – dictionary mapping filename expression to OMFIT classes, for example: {‘.dat’: ‘OMFITnamelist’, ‘.xml’: ‘OMFITxml’}

class omfit_classes.omfit_base.OMFITlist(iterable=(), /)[source]

Bases: list

list of which the individual values are saved to disk as is done for the key:value pairs in an OMFITtree object

omfit_classes.omfit_base.module_selective_deepcopy(location, classes)[source]

Function that returns the equivalent of a module copy.deepcopy but can selectively include entries based on their class

Parameters
  • location – string with OMFIT tree location where the module resides

  • classes – list of strings with allowed classes

Returns

copy of the module containing a deepcopy of the allowed classes

omfit_classes.omfit_base.module_noscratch_deepcopy(module_root)[source]

deepcopy of a module (and submodules) but neglecting the scratch ‘__scratch__’ directories

Parameters

module_root – instance of the module to be copied

Returns

module deepcopy

class omfit_classes.omfit_base.OMFITcollection(*args, **kwargs)[source]

Bases: omfit_classes.omfit_base.OMFITtree

A class for holding sets of similar objects

Parameters
  • selector – sets selection of a specific realization (None for all realizations, ‘random’ for a random realization). This can also be an OMFITexpression.

  • strategy – sets operation to be performed on all realizations (if .selector == None)

  • raise_errors

    sets how to proceed in case of errors (eg. missing objects, attributes)

    • None: print warning on errors

    • True: raise errors

    • False: ignore errors

  • no_subdir

    This keyword affects how the OMFITcollection is deployed.

    • False (default) the OMFITcollection is deployed like a normal OMFITtree, such that the files of the

      collection are deployed under a subdirectory, whose name comes from the entry in the tree

    • True the files are deployed in the same level directory as the parent

.selector, .strategy, .raise_errors can be modified after the object is instantiated

>> tmp=OMFITcollection(raise_errors=True)
>> for k in range(1,10):
>>     tmp[k]={}
>>     tmp[k]['hello']=np.linspace(0,1,100)**k
>>
>> # return a single realization
>> tmp.selector=5
>> tmp.strategy=None
>> pyplot.plot(tmp['hello'],'--r',lw=2)
>>
>> # return all realizations
>> tmp.selector=None
>> tmp.strategy=None
>> plotc(tmp['hello'])
>>
>> # attribute on all realizations
>> tmp.selector=None
>> tmp.strategy=None
>> print(tmp['hello'].mean())
>>
>> # perform operation on all realizations
>> tmp.selector=None
>> tmp.strategy='np.mean(x,axis=1)'
>> pyplot.plot(tmp['hello'],'k--',lw=2)
>>
>> OMFIT['variable']=3
>> tmp.selector=OMFITexpression("OMFIT['variable']")
>> print(tmp)
>>
>> print(tmp.GET(3)['hello']-tmp['hello'])
>>
>> # to update values, you can use the UPDATE method
>> tmp.UPDATE(location="['hello'][0]",values=-1e6)
>> tmp.selector=None
>> tmp.strategy=None
>> print(tmp['hello'].mean())

Parameters
  • filename – ‘directory/bla/OMFITsave.txt’ or ‘directory/bla.zip’ where the OMFITtree will be saved (if ‘’ it will be saved in the same folder of the parent OMFITtree)

  • only – list of strings used to load only some of the branches from the tree (eg. [“[‘MainSettings’]”,”[‘myModule’][‘SCRIPTS’]”]

  • modifyOriginal – by default OMFIT will save a copy and then overwrite previous save only if successful. If modifyOriginal=True and filename is not .zip, will write data directly at destination, which will be faster but comes with the risk of deleting a good save if the new save fails for some reason

  • readOnly – will place entry in OMFITsave.txt of the parent so that this OMFITtree can be loaded, but will not save the actual content of this subtree. readOnly=True is meant to be used only after this subtree is deployed where its filename says it will be. Using this feature can result in much faster project saves if the content of this tree is large.

  • quiet – Verbosity level

  • developerMode – load OMFITpython objects within the tree as modifyOriginal

  • serverPicker – take server/tunnel info from MainSettings[‘SERVER’]

  • remote – access the filename in the remote directory

  • server – if specified the file will be downsynced from the server

  • tunnel – access the filename via the tunnel

  • **kw – Extra keywords are passed to the SortedDict class

property selector
property filename
property strategy
property raise_errors
pload(nprocs, no_mpl_pledge=True, **kw)[source]

Parallel loading of all entries in an OMFITcollection.

Parameters
  • nprocs – the number of processes to use while loading (might be decreased to prevent memory overage)

  • no_mpl_pledge – pledge that the load function does not use matplotlib. It’s probably not needed anymore.

KEYS()[source]

Returns list of keys within the OMFITcollection, regardless of the value of the .selector. This is equivalent to self.keys() when self.selector=None.

SET(key, value)[source]

Writes the key entry in the collection dictionary, regardless of the value of the .selector. This is equivalent to self[key]=value when self.selector=None.

Parameters
  • key – entry of the collection

  • value – value to be given

GET(key)[source]

Returns the key entry from the collection dictionary, regardless of the value of the .selector. This is equivalent to self[key] when self.selector=None.

Parameters

key – entry of the collection

CONTAINS(key)[source]

Returns whether key is in the collection dictionary, regardless of the value of the .selector. This is equivalent to key in self when self.selector=None.

Parameters

key – entry of the collection

UPDATE(location, values, verbose=False)[source]

Update the location of the ith key of self.KEYS() with the ith value of values

Parameters
  • location – A string indicating the common part that will be updated

  • values – An iterable of the same length as self.KEYS() or a single value to be broadcast to all keys

Returns

None

Example:

coll.UPDATE(location="['SAT_RULE']", values=[1]*len(coll.KEYS()))
coll.UPDATE(location="['SAT_RULE']", values=1)
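The broadcasting rule for values can be sketched as follows; update_sketch is a hypothetical stand-in operating on plain dicts, with a fixed 'SAT_RULE' key standing in for the location string.

```python
def update_sketch(collection, values):
    """Hypothetical sketch of UPDATE's broadcasting rule on plain dicts:
    a single value is broadcast to every key, while an iterable of the
    same length as the keys is applied element-wise. The fixed 'SAT_RULE'
    key stands in for the location string."""
    keys = list(collection)
    if not isinstance(values, (list, tuple)) or len(values) != len(keys):
        values = [values] * len(keys)
    for k, v in zip(keys, values):
        collection[k]['SAT_RULE'] = v
    return collection
```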

class omfit_classes.omfit_base.OMFITmcTree(*args, **kwargs)[source]

Bases: omfit_classes.omfit_base.OMFITcollection

A class for holding results from a Monte Carlo set of runs

Effectively this is an OMFITcollection class with the default strategy set to: uarray(np.mean(x,1),np.std(x,1))

Parameters
  • filename – ‘directory/bla/OMFITsave.txt’ or ‘directory/bla.zip’ where the OMFITtree will be saved (if ‘’ it will be saved in the same folder of the parent OMFITtree)

  • only – list of strings used to load only some of the branches from the tree (eg. [“[‘MainSettings’]”,”[‘myModule’][‘SCRIPTS’]”]

  • modifyOriginal – by default OMFIT will save a copy and then overwrite previous save only if successful. If modifyOriginal=True and filename is not .zip, will write data directly at destination, which will be faster but comes with the risk of deleting a good save if the new save fails for some reason

  • readOnly – will place entry in OMFITsave.txt of the parent so that this OMFITtree can be loaded, but will not save the actual content of this subtree. readOnly=True is meant to be used only after this subtree is deployed where its filename says it will be. Using this feature can result in much faster project saves if the content of this tree is large.

  • quiet – Verbosity level

  • developerMode – load OMFITpython objects within the tree as modifyOriginal

  • serverPicker – take server/tunnel info from MainSettings[‘SERVER’]

  • remote – access the filename in the remote directory

  • server – if specified the file will be downsynced from the server

  • tunnel – access the filename via the tunnel

  • **kw – Extra keywords are passed to the SortedDict class

get_mean_std(path)[source]
class omfit_classes.omfit_base.OMFITstorage(*args, **kwargs)[source]

Bases: omfit_classes.omfit_base.OMFITtree

Parameters
  • filename – ‘directory/bla/OMFITsave.txt’ or ‘directory/bla.zip’ where the OMFITtree will be saved (if ‘’ it will be saved in the same folder as the parent OMFITtree)

  • only – list of strings used to load only some of the branches from the tree (e.g. [“[‘MainSettings’]”, “[‘myModule’][‘SCRIPTS’]”])

  • modifyOriginal – by default OMFIT saves a copy and overwrites the previous save only if successful. If modifyOriginal=True and filename is not a .zip, data is written directly at the destination, which is faster but carries the risk of destroying a good save if the new save fails for some reason

  • readOnly – will place an entry in the OMFITsave.txt of the parent so that this OMFITtree can be loaded, but will not save the actual content of this subtree. readOnly=True is meant to be used only after this subtree is deployed where its filename says it will be. Using this feature can result in much faster project saves if the content of this tree is large.

  • quiet – Verbosity level

  • developerMode – load OMFITpython objects within the tree as modifyOriginal

  • serverPicker – take server/tunnel info from MainSettings[‘SERVER’]

  • remote – access the filename in the remote directory

  • server – if specified, the file will be synced down from the server

  • tunnel – access the filename via the tunnel

  • **kw – Extra keywords are passed to the SortedDict class

class omfit_classes.omfit_base.OMFITtreeCompressed(input, **kw)[source]

Bases: omfit_classes.startup_framework.OMFITobject

class omfit_classes.omfit_base.OMFITmodule(*args, **kwargs)[source]

Bases: omfit_classes.omfit_base.OMFITtree

Class for OMFIT modules

Parameters
  • filename – None: create new module from skeleton, ‘’: create an empty module

  • filename – ‘directory/bla/OMFITsave.txt’ or ‘directory/bla.zip’ where the OMFITtree will be saved (if ‘’ it will be saved in the same folder as the parent OMFITtree)

  • only – list of strings used to load only some of the branches from the tree (e.g. [“[‘MainSettings’]”, “[‘myModule’][‘SCRIPTS’]”])

  • modifyOriginal – by default OMFIT saves a copy and overwrites the previous save only if successful. If modifyOriginal=True and filename is not a .zip, data is written directly at the destination, which is faster but carries the risk of destroying a good save if the new save fails for some reason

  • readOnly – will place an entry in the OMFITsave.txt of the parent so that this OMFITtree can be loaded, but will not save the actual content of this subtree. readOnly=True is meant to be used only after this subtree is deployed where its filename says it will be. Using this feature can result in much faster project saves if the content of this tree is large.

  • quiet – Verbosity level

  • developerMode – load OMFITpython objects within the tree as modifyOriginal

  • serverPicker – take server/tunnel info from MainSettings[‘SERVER’]

  • remote – access the filename in the remote directory

  • server – if specified, the file will be synced down from the server

  • tunnel – access the filename via the tunnel

  • **kw – Extra keywords are passed to the SortedDict class

property settings_class
property ID
property edited_by
property date
property description
property contact
property defaultGUI
property commit
property comment
moduleDict(onlyModuleID=None, level=-1)[source]

Returns a dictionary of currently loaded modules

Parameters
  • onlyModuleID – string or list of strings of module ID to search for

  • level – how many modules deep to go

The dictionary contains the modules settings for each of the modules

store(runid, metadata={})[source]

Method to store a snapshot of the current module status and save it under self[‘__STORAGE__’][runid] where runid is set under self[‘SETTINGS’][‘EXPERIMENT’][‘runid’]

Parameters

runid – runid to be stored. If None the runid is taken from self[‘SETTINGS’][‘EXPERIMENT’][‘runid’]

restore(runid, restoreScripts=None)[source]

Method to restore a snapshot of the current module status as it was saved under self[‘__STORAGE__’][runid]

Parameters

runid – runid to be restored. If None the runid is taken from self[‘SETTINGS’][‘EXPERIMENT’][‘runid’]

destore(runid)[source]

Method to de-store a snapshot of the current module status as it was saved under self[‘__STORAGE__’][runid]

Parameters

runid – runid to be de-stored. If None the runid is taken from self[‘SETTINGS’][‘EXPERIMENT’][‘runid’]

static info(filename)[source]

This (static) method returns a dictionary with the module information, including the content of the [‘help’] file

Parameters

filename – module filename

Returns

dictionary with module info

static directories(return_associated_git_branch=False, separator=None, checkIsWriteable=False, directories=None)[source]

Return a list of valid module paths

Parameters
  • return_associated_git_branch – whether to return just the path of each directory or also the remote/branch info. This requires parsing the OMFIT modules in a directory and can be quite slow; however, the info is buffered, so later accesses are faster.

  • separator – text to use to separate the path and the remote/branch info

  • checkIsWriteable – checks if user has write access. Note: if checkIsWriteable=’git’ will return a directory even if it is not writable, but it is a git repository

  • directories – list of directories to check. If None the list of directories is taken from OMFIT[‘MainSettings’][‘SETUP’][‘modulesDir’]

Returns

list of valid module directories

static submodules(go_deep=True, directory=None)[source]

This (static) method returns a dictionary with the list of submodules for each of the modules in a directory

Parameters
  • go_deep – include submodules of submodules

  • directory – modules directory to use, by default the one of the repository where OMFIT is running

Returns

dictionary with submodule info for all modules in a directory

saveUserSettings(variant='__default__', locations=["['PHYSICS']"])[source]

Save user settings in ~/.OMFIT/modulesSettings/

Parameters
  • variant – variant name of the user setting to save

  • locations – by default only save [‘PHYSICS’]

loadUserSettings(variant='__default__', diff=False)[source]

Load user settings in ~/.OMFIT/modulesSettings/

Parameters
  • variant – variant name of the user setting to load

  • diff – open a diff GUI to let users choose what to merge

listUserSettings(verbose=False)[source]

List user settings in ~/.OMFIT/modulesSettings/

deleteUserSettings(variant='__default__')[source]

Delete user settings in ~/.OMFIT/modulesSettings/

Parameters

variant – variant name of the user setting to delete

experiment_location(*args)[source]

Method that resolves the OMFITexpressions that are found in a module root[‘SETTINGS’][‘EXPERIMENT’] and returns the locations that those expressions point to.

Params *args

list of keys to return the absolute location of

Returns

dictionary with the absolute location the expressions point to

experiment(*args, **kw)[source]

Method used to set the value of the entries under root[‘SETTINGS’][‘EXPERIMENT’]. This method resolves the OMFITexpressions that are found in a module root[‘SETTINGS’][‘EXPERIMENT’] and sets the value at the locations that those expressions point to.

Params **kw

keyword arguments with the values to be set

Returns

root[‘SETTINGS’][‘EXPERIMENT’]

deploy_module(*args, **kw)[source]

Method used to deploy a module in its format for being stored as part of a modules repository

Parameters
  • *args – arguments passed to the deploy method

  • **kw – keywords arguments passed to the deploy method

convert_to_developer_mode(processSubmodules=True, modulesDir='/home/fusionbot/jenkins/newweb/modules', operation='DEVEL', loadNewSettings=True, quiet=False)[source]

Convert scripts in this module to be modifyOriginal versions of the scripts under the modulesDir (so-called developer mode)

Parameters
  • processSubmodules – bool convert scripts in the submodules

  • modulesDir – string modules directory to use

  • operation – string One of [‘DEVEL’, ‘RELOAD’, ‘FREEZE’]:
    DEVEL: reload scripts with modifyOriginal=True
    RELOAD: reload scripts with modifyOriginal=False
    FREEZE: set modifyOriginal=False

  • loadNewSettings – bool Load new entries in the modules settings or not (ignored if operation==’FREEZE’)

  • quiet – bool Suppress print statements

class omfit_classes.omfit_base.OMFITtmp(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.omfit_base._OMFITnoSave

Same as the SortedDict class, but not saved when a project is saved. This class is used to define the __scratch__ space under each module, as well as the global OMFIT[‘scratch’]

class omfit_classes.omfit_base.OMFITproject(*args, **kwargs)[source]

Bases: omfit_classes.omfit_base.OMFITtree

Similar to OMFITtree class

Parameters
  • filename – ‘directory/bla/OMFITsave.txt’ or ‘directory/bla.zip’ where the OMFITtree will be saved (if ‘’ it will be saved in the same folder as the parent OMFITtree)

  • only – list of strings used to load only some of the branches from the tree (e.g. [“[‘MainSettings’]”, “[‘myModule’][‘SCRIPTS’]”])

  • modifyOriginal – by default OMFIT saves a copy and overwrites the previous save only if successful. If modifyOriginal=True and filename is not a .zip, data is written directly at the destination, which is faster but carries the risk of destroying a good save if the new save fails for some reason

  • readOnly – will place an entry in the OMFITsave.txt of the parent so that this OMFITtree can be loaded, but will not save the actual content of this subtree. readOnly=True is meant to be used only after this subtree is deployed where its filename says it will be. Using this feature can result in much faster project saves if the content of this tree is large.

  • quiet – Verbosity level

  • developerMode – load OMFITpython objects within the tree as modifyOriginal

  • serverPicker – take server/tunnel info from MainSettings[‘SERVER’]

  • remote – access the filename in the remote directory

  • server – if specified, the file will be synced down from the server

  • tunnel – access the filename via the tunnel

  • **kw – Extra keywords are passed to the SortedDict class

moduleDict(onlyModuleID=None, level=-1)[source]

Returns a dictionary of currently loaded modules

Parameters
  • onlyModuleID – string or list of strings of module ID to search for

  • level – how many modules deep to go

The dictionary contains the modules settings for each of the modules

static info(filename='')[source]

Returns dictionary with the saved project information.

Parameters

filename – filename of the project to return info about. If filename=’’ then returns info about the current project. Note that project information is updated only when the project is saved.

Returns

dictionary with project info

class omfit_classes.omfit_base.OMFIThelp(*args, **kwargs)[source]

Bases: omfit_classes.omfit_ascii.OMFITascii, omfit_classes.sortedDict.SortedDict

Class used to parse OMFIT modules help.rst files

load()[source]
verify()[source]
save()[source]

The save method is meant to be overridden by classes that use OMFITobject as a superclass. If left as is, this method detects whether .filename was changed and, if so, makes a copy from the original .filename (saved in the .link attribute) to the new .filename

txt()[source]
class omfit_classes.omfit_base.OMFITexpression(expression)[source]

Bases: object

This class handles so called OMFIT dynamic expressions.

If you generate dynamic expressions in a Python script, note that the relative location of the expression (root,parent,DEPENDENCIES,…) is evaluated with respect to where the expression is in the tree, not relative to where the script which generated the expression resides

If you are using relative locations in your expression, things may not work if you have the same expression in two locations in the tree. A classic example is a memory copy of something (e.g. a namelist) containing an expression to some other location in the tree. If this happens, the results are unpredictable.

Parameters

expression – string containing python code to be dynamically evaluated every time an attribute of this object is accessed

bool()[source]
real()[source]
imag()[source]
lt(b)[source]
le(b)[source]
eq(b)[source]
ne(b)[source]
ge(b)[source]
gt(b)[source]
not_()[source]
truth()[source]
is_(b)[source]
is_not(b)[source]
abs()[source]
add(b)[source]
and_(b)[source]
floordiv(b)[source]
index()[source]
inv()[source]
invert()[source]
lshift(b)[source]
mod(b)[source]
mul(b)[source]
matmul(b)[source]
neg()[source]
or_(b)[source]
pos()[source]
pow(b)[source]
rshift(b)[source]
sub(b)[source]
truediv(b)[source]
xor(b)[source]
class omfit_classes.omfit_base.OMFITiterableExpression(expression)[source]

Bases: omfit_classes.omfit_base.OMFITexpression

Subclass of OMFITexpression used for iterable objects. The distinction between iterable and non-iterable expressions is used in case someone tests an object for iterability.

concat(b)[source]
contains(b)[source]
countOf(b)[source]
delitem(b)[source]
getitem(b)[source]
indexOf(b)[source]
setitem(b, c)[source]
length_hint(default=0)[source]
attrgetter(*attrs)[source]
itemgetter(*items)[source]
methodcaller(name, *args)[source]
class omfit_classes.omfit_base.OMFITlazyLoad(location, tp, filename, tree_repr=None, **kw)[source]

Bases: object

OMFIT class that imports xarray datasets with dynamic loading

load()[source]
save()[source]
save_as(filename)[source]
deploy(filename)[source]
property cls
omfit_classes.omfit_base.relativeLocations(location, dependencies=True)[source]

This function provides a dictionary of references to some useful quantities relative to the object specified in location. Note that the variables in the returned dictionary are the same ones that are available within the namespace of OMFIT scripts and expressions.

Parameters

location – location in the OMFIT tree

Returns

dictionary containing the following variables:

  • OMFITlocation : list of references to the tree items that compose the OMFIT-tree path

  • OMFITlocationName : list of path strings to the tree items that compose the OMFIT-tree path

  • parent : reference to the parent object

  • parentName : path string to the parent object

  • this : reference to the current object

  • thisName : path string to the current object

  • OMFITmodules : list of modules to the current module

  • OMFITmodulesName : list of string paths to the current module

  • MainSettings : reference to OMFIT[‘MainSettings’]

  • MainScratch: reference to OMFIT[‘scratch’]

  • scratch : reference to module scratch

  • scratchName : string path to module scratch

  • root : reference to this module

  • rootName : string path to this module

  • DEPENDENCIES : variables defined within the module

omfit_classes.omfit_base.absLocation(location, base, base_is_relativeLocations_output=False)[source]

This method translates relative location strings to absolute location strings in the OMFIT tree

Parameters
  • location – string (or list of strings) with relative/absolute location in the OMFIT tree

  • base – tree object with respect to which the query is made

  • base_is_relativeLocations_output – is the base parameter the output of the relativeLocations() function

Returns

absolute location string (or list of strings) in the OMFIT tree
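The idea can be illustrated with a toy resolver. The `abs_location` helper below is hypothetical and only expands a leading `root`; the real absLocation also resolves parent, OMFITmodules and the other names provided by relativeLocations():

```python
def abs_location(location, base_name):
    """Toy resolver: expand a leading 'root' in a relative location string
    to the absolute base path; pass absolute locations through unchanged."""
    if location.startswith('root'):
        return base_name + location[len('root'):]
    return location  # already an absolute OMFIT-tree path
```

For example, resolving `"root['SETTINGS']['EXPERIMENT']"` against a base of `"OMFIT['myModule']"` yields `"OMFIT['myModule']['SETTINGS']['EXPERIMENT']"`.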

omfit_classes.omfit_base.isinstance(a, b)[source]

Return whether an object is an instance of a class or of a subclass thereof.

A tuple, as in isinstance(x, (A, B, ...)), may be given as the target to check against. This is equivalent to isinstance(x, A) or isinstance(x, B) or ... etc.

This function is modified to account for special behavior of some OMFIT classes, such as OMFITexpression.
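The override can be pictured as unwrapping expression-like proxies before delegating to the builtin. This is a sketch with a hypothetical ExpressionProxy class; the actual OMFITexpression evaluation is more involved:

```python
_builtin_isinstance = isinstance

class ExpressionProxy:
    """Stand-in for OMFITexpression: wraps a value behind dynamic evaluation."""
    def __init__(self, value):
        self._value = value

def isinstance_(obj, cls):
    """Like builtin isinstance, but compare against the evaluated value
    when obj is an expression-like proxy."""
    if _builtin_isinstance(obj, ExpressionProxy):
        obj = obj._value
    return _builtin_isinstance(obj, cls)
```

With this, a proxy wrapping an int tests as an int, which is the behavior users expect from expressions in the tree.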

omfit_classes.omfit_base.type(a, *args, **kw)[source]
omfit_classes.omfit_base.hasattr(object, attribute)[source]
class omfit_classes.omfit_base.shotBookmarks(*args, **kwargs)[source]

Bases: omfit_classes.namelist.NamelistFile, omfit_classes.omfit_base._OMFITnoSave

load(*args, **kw)[source]

Load namelist from file.

class omfit_classes.omfit_base.OMFITmainSettings(*args, **kwargs)[source]

Bases: omfit_classes.omfit_namelist.OMFITnamelist

Contains system wide settings

load(*args, **kw)[source]

Method used to load the content of the file specified in the .filename attribute

Returns

None

sort()[source]
Parameters

key – function that returns a string that is used for sorting or dictionary key whose content is used for sorting

>> tmp = SortedDict()
>> for k in range(5):
>>     tmp[k] = {}
>>     tmp[k][‘a’] = 4 - k
>> # sort by dictionary key
>> tmp.sort(key=’a’)
>> # or equivalently
>> tmp.sort(key=lambda x: tmp[x][‘a’])

Parameters

**kw – additional keywords passed to the underlying list sort command

Returns

sorted keys

omfit_classes.omfit_base.omfit_log(action, details)[source]
omfit_classes.omfit_base.ismodule(module, module_ids)[source]

convenience function to check if an OMFITmodule is one of a given set of IDs

Parameters
  • module – module

  • module_ids – list of module IDs to check for

Returns

True if the module ID matches one of the IDs in the module_ids list

omfit_classes.omfit_base.diffTreeGUI(this, other, thisName='Original', otherName='Compared to', resultName='Final result', title=None, diffSubModules=True, precision=0.0, description=False, tellDescription=False, deepcopyOther=True, skipClasses=(), noloadClasses=(), flatten=False, allow_merge=True, always_show_GUI=False, order=True, favor_my_order=False, modify_order=False)[source]

Function used to compare two dictionary objects and manage their merge

Parameters
  • this – reference object (the one where data will be written to in case of merge)

  • other – secondary object to compare to

  • thisName – [‘Original’] representation of reference object in the GUI

  • otherName – [‘Compared to’] representation of secondary object in the GUI

  • resultName – [‘Final result’] representation of result

  • title – [None] GUI title

  • diffSubModules – [True] do diff of submodules or not. If None only compare module ID and its DEPENDENCIES.

  • precision – [0.0] relative precision to which to compare objects

  • description – [False] commit to be input by the user

  • tellDescription – [False] show description or not

  • deepcopyOther – [True] Whether to perform internally other=copy.deepcopy(other) to avoid diffTreeGUI to modify the original content

  • skipClasses – () tuple containing classes to skip

  • noloadClasses – () tuple containing classes to not load

  • flatten – whether to flatten the data in nested dictionaries

  • allow_merge – whether to allow merging of data

  • always_show_GUI – whether to show GUIs even if there are no differences

  • order – order of the keys matters

  • favor_my_order – favor my order of keys

  • modify_order – update order of input dictionaries based on diff

Returns

(switch,False/True,keys)

  • switch is a dictionary which lists all of the differences

  • False/True will be used to keep track of what the user wants to switch between dictionaries

  • keys is used to keep the keys of the merged dictionary in order

omfit_classes.omfit_base.exportTreeGUI(this, title=None, description=False)[source]
omfit_classes.omfit_base.diffViewer(root, thisFilename=None, otherFilename=None, thisName='Original', otherName='New', title=None, thisString=None, otherString=None)[source]

Present a side by side visual comparison of two strings

Parameters

root – A tk master GUI

omfit_classes.omfit_base.askDescription(parent, txt, label, showInsertDate=False, showHistorySeparate=False, expand=0, scrolledTextKW={})[source]
omfit_classes.omfit_base.evalExpr(inv)[source]

Return the object that a dynamic expression returns when evaluated. This allows OMFITexpression(‘None’) is None to work as one would expect. Expressions that are invalid will raise an OMFITexception when evaluated.

Parameters

inv – input object

Returns

  • If inv was a dynamic expression, returns the object that dynamic expressions return when evaluated

  • Else returns the input object

omfit_bibtex

class omfit_classes.omfit_bibtex.OMFITbibtex(*args, **kwargs)[source]

Bases: omfit_classes.omfit_ascii.OMFITascii, omfit_classes.sortedDict.SortedDict

Class used to parse bibtex files. The class should be structured as a dictionary of dictionaries (one dictionary for each bibtex entry). Each bibtex entry must have the keys ENTRYTYPE and ID defined.

Parameters
  • filename – filename of the .bib file to parse

  • **kw – keyword dictionary passed to OMFITascii class

To generate a list of your own publications:

1. Export all of your citations from https://scholar.google.com to a citation.bib bibtex file
2. OMFIT[‘bib’]=OMFITbibtex(‘citation.bib’) # load citations as OMFITbibtex
3. OMFIT[‘bib’].doi(deleteNoDOI=True) # remove entries which do not have a DOI (i.e. conferences)
4. OMFIT[‘bib’].sanitize() # fix entries where needed
5. OMFIT[‘bib’].update_ID(as_author=’Meneghini’) # sort entries and distinguish between first-author and contributed
6. print(‘\n\n’.join(OMFIT[‘bib’].write_format())) # write to whatever format desired
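As an illustration of the dictionary-of-dictionaries layout described above, here is a deliberately minimal bibtex reader. The `parse_bibtex` helper is hypothetical and handles only the simplest `field = {value}` entries; the real class uses a full bibtex parser:

```python
import re

def parse_bibtex(text):
    """Tiny bibtex reader: one dict per entry, keyed by citation ID,
    each carrying ENTRYTYPE, ID, and simple 'field = {value}' pairs."""
    entries = {}
    for m in re.finditer(r'@(\w+)\{([^,]+),(.*?)\n\}', text, re.S):
        entry = {'ENTRYTYPE': m.group(1).lower(), 'ID': m.group(2).strip()}
        for fm in re.finditer(r'(\w+)\s*=\s*\{([^{}]*)\}', m.group(3)):
            entry[fm.group(1).lower()] = fm.group(2)
        entries[entry['ID']] = entry
    return entries
```

Feeding it a single `@article{smith2020, ...}` entry yields `{'smith2020': {'ENTRYTYPE': 'article', 'ID': 'smith2020', ...}}`.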

load()[source]
write_format(form='\\item[]{{{author}; {title}; {journal} {volume} {year}: \\href{{http://dx.doi.org/{doi}}}{{doi:{doi}}}}}')[source]

returns list with entries formatted according to form string

Parameters

form – format string used to render each entry

Returns

list of strings

save()[source]

The save method is meant to be overridden by classes that use OMFITobject as a superclass. If left as is, this method detects whether .filename was changed and, if so, makes a copy from the original .filename (saved in the .link attribute) to the new .filename

doi(deleteNoDOI=False)[source]

method for adding DOI information to bibtex

Parameters

deleteNoDOI – delete entries without DOI

sanitize()[source]

Sanitizes the database entries:

1. Fix all-caps author names
2. Fix unicode characters

filter(conditions)[source]

filter database given a set of conditions

Parameters

conditions – list of strings (e.g. [‘int(year)>2012’])

Returns

filtered OMFITbibtex object

update_ID(fmt='lower1stAuthor_year', separator=':', as_author=False)[source]

set bibtex ID

Parameters
  • fmt – string with format ‘year_1stAuthor_jrnl’

  • separator – string with separator for fmt

  • as_author – only keep entries that have as_author as author

omfit_classes.omfit_bibtex.searchdoi(title, author)[source]

This function returns a list of dictionaries containing the best matching papers for the title and authors according to the crossref.org website

Parameters
  • title – string with the title

  • author – string with the authors

Returns

list of dictionaries with info about the papers found

omfit_boundary

class omfit_classes.omfit_boundary.DI_signal(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict

Principal data class used by _Detachment_Indicator.

Data is fetched as an MDS+ pointname, truncated, ELM filtered, smoothed and remapped to an independent variable. See below for inherited classes with specific implementation needs (e.g. Langmuir probes, DTS, …)

Parameters

tag – string a keyword name for this signal, e.g. the pointname of an MDS+ signal

fetch(shotnumber, device)[source]

Fetch data from MDS+

Parameters
  • shotnumber – int Shot identifier

  • device – string Device name, to be used as server name in creation of the MDS+ connection

truncate_times(time_range)[source]

Trim the dataset down by removing values. Helps with speed and to avoid out-of-bounds interpolation errors

Parameters

time_range – list Min and max times for truncation [min, max]
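The truncation amounts to keeping samples inside the requested window. A minimal sketch on parallel time/value lists follows (hypothetical `truncate_times` helper and assumed layout; the real method trims its stored dataset in place):

```python
def truncate_times(times, values, time_range):
    """Keep only the samples whose time falls inside [tmin, tmax],
    returning the trimmed (times, values) pair."""
    tmin, tmax = time_range
    kept = [(t, v) for t, v in zip(times, values) if tmin <= t <= tmax]
    return [t for t, _ in kept], [v for _, v in kept]
```

Besides speed, this avoids out-of-bounds interpolation errors in the later remapping step.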

ELM_filter(elm_obj, filter_settings)[source]

Requires that trimmed data already exists.

A DI_signal doesn’t need to know the details of the ELM identification: ELMs are identified globally for a shot and then shared out to a variety of signals/diagnostics for processing. As such, the ELM filtering is done here using a pre-defined omfit_elm object.

Parameters
  • elm_obj – omfit_elm instance

  • filter_settings – Dict dict containing the values to be passed to ELM filtering. See OMFITelm for specification

smooth(smooth_settings)[source]

Smooth the filtered data using the requested kind of smoothing.

Parameters

smooth_settings – Dict A dict of settings as required for the particular kind of smoothing requested in smooth_settings[‘smoothtype’]

remap(remap_signal)[source]

Remap the signal onto an independent variable, which itself is a DI_signal instance
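Conceptually, remapping evaluates the reference (independent-variable) signal at the times of this signal, so the signal can be expressed as a function of the reference. A pure-Python linear-interpolation sketch of that idea (hypothetical helpers, not the actual implementation):

```python
def interp(t, times, values):
    """Linear interpolation of a monotonically increasing time series at t."""
    for i in range(len(times) - 1):
        if times[i] <= t <= times[i + 1]:
            frac = (t - times[i]) / (times[i + 1] - times[i])
            return values[i] + frac * (values[i + 1] - values[i])
    raise ValueError('t outside the sampled range')

def remap(sig_times, sig_values, ref_times, ref_values):
    """Pair each sample of the signal with the reference signal evaluated
    at the same time, returning (reference values, signal values)."""
    x = [interp(t, ref_times, ref_values) for t in sig_times]
    return x, list(sig_values)
```

This is why the truncation step matters: samples outside the reference's time range would otherwise raise out-of-bounds interpolation errors.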

plot()[source]

A basic plotting function of the available datatypes.

plot_elm_check(xrange=None)[source]
class omfit_classes.omfit_boundary.DI_DTS_signal(*args, **kwargs)[source]

Bases: omfit_classes.omfit_boundary.DI_signal

DTS specific methods. Nothing fancy here, just a parsing of the DTS arrays stored in MDS+. It can’t be done with DI_signal as the MDS+ arrays are 2D.

Initialization.

Parameters

tag – string Reference name for the quantity. This will be in the form DTS_x_y, where x is the quantity (e.g. DENS, TEMP) and y is the channel (e.g. 0 for the channel closest to the plate), e.g. ‘DTS_TEMP_0’. Case independent.

fetch(shotnumber, device)[source]

Fetch data from MDS+ and split up to find requested quantity on requested channel.

Parameters
  • shotnumber – integer Shotnumber for the data fetch

  • device – string Server name for the MDS+ data fetch, e.g. ‘DIII-D’

class omfit_classes.omfit_boundary.DI_file_signal(*args, **kwargs)[source]

Bases: omfit_classes.omfit_boundary.DI_signal

A convenient way to make a DI_signal object when the times and values are coming from a file, for example a custom read of some data from an IDL save file. In these cases, it is sometimes easier just to pass the raw data straight to the object, which will then handle the filtering, remapping etc. in a way that is equivalent to the other diagnostics.

Initialize DI object with tag as specified.

Parameters

tag – string Tag name for signal, used for identification purposes

fetch(shotnumber, times, data, units='')[source]

No fetching actually occurs for this subclass. In this case, the otherwise-sourced data is simply placed into the attributes that are required by the processing methods in DI_signal.

Parameters
  • shotnumber – int Shotnumber, for reference purposes

  • times – array-like List of times corresponding to data [ms]

  • data – array-like The actual data, 1D array

  • units – string Description of the units of data

class omfit_classes.omfit_boundary.DI_LP_signal(*args, **kwargs)[source]

Bases: omfit_classes.omfit_boundary.DI_signal

Langmuir probe specific methods. Interacts with the Langmuir_Probes module.

Parameters

tag – string

reference name for the signal

fetch(shotnumber, probe_tree, param_name, units, processing_settings=None)[source]

This subclass can’t know anything about where the LP data is being saved in the tree. It can however operate on one of the probe sub trees. That means the Langmuir_Probe module tree needs to be populated first, and then one of the sub trees from its [‘OUTPUT’] can be passed into here, alongside the physical quantity to be extracted from the tree.

Parameters
  • probe_tree – OMFITtree a subtree of the Langmuir_Probe module corresponding to a single probe

  • param_name – string The name of the physical parameter to be extracted from the probe tree (e.g. JSAT, DENS, or TEMP)

  • units – OMFITtree Descriptors of the units for each of the param_name options. This often lives in Langmuir_Toolbox[‘OUTPUTS’][‘LP_MDS’][‘units’]

  • processing_settings – Dict settings dict containing filter_settings, smoothing_settings, DOD_tmin, DOD_tmax. See process_data.py for full details.

class omfit_classes.omfit_boundary.DI_GW_signal(*args, **kwargs)[source]

Bases: omfit_classes.omfit_boundary.DI_signal

A convenience class for calculating the Greenwald fraction

Initialization of the Greenwald DI_signal.

Parameters

tag – string reference name for the signal

fetch(shotnumber, device, processing_settings)[source]

Fetch the MDS+ values for density, aminor, and ip to calculate the Greenwald fraction

Parameters
  • shotnumber – integer The number of the shot of interest

  • device – string Name of the server to be used for MDS+ calls

  • processing_settings – dict A nested dictionary of settings to be used for smoothing and ELM filtering. See process_data.py for more information

class omfit_classes.omfit_boundary.DI_asdex_signal(*args, **kwargs)[source]

Bases: omfit_classes.omfit_boundary.DI_signal

A convenience class for calculating the various kinds of neutral compression from the gauges

Initialization of the neutral-compression DI_signal.

Parameters

tag – string reference name for the signal

fetch(shotnumber, device, processing_settings)[source]

Fetch the MDS+ values for calculation of the neutral compression in the SAS.

Parameters
  • shotnumber – integer The number of the shot of interest

  • device – string Name of the server to be used for MDS+ calls

  • processing_settings – dict A nested dictionary of settings to be used for smoothing and ELM filtering. See process_data.py for more information

omfit_bout

class omfit_classes.omfit_bout.OMFITbout(filename, grid=None, tlim=None, **kw)[source]

Bases: omfit_classes.omfit_data.OMFITncDataset

Parameters
  • filename – Path to file

  • lock – Prevent in memory changes to the DataArray entries contained

  • exportDataset_kw – dictionary passed to exportDataset on save

  • data_vars – see xarray.Dataset

  • coords – see xarray.Dataset

  • attrs – see xarray.Dataset

  • **kw – arguments passed to OMFITobject

add_grid(grid)[source]
pol_slice(var3d, n=1, zangle=0.0, info=True)[source]

return 2D data from 3D array with x, y, z dimensions

Parameters
  • var3d – variable to process (string or 3D variable)

  • n – toroidal mode number

  • zangle – toroidal angle

  • info – print info to screen

Returns

2d data

class omfit_classes.omfit_bout.OMFITboutinp(*args, **kwargs)[source]

Bases: omfit_classes.omfit_ini.OMFITini

OMFIT class used to interface with BOUT++ configuration files (INP files).

This class is based on the configobj class https://configobj.readthedocs.io/en/latest/index.html

Parameters
  • filename – filename passed to OMFITascii class

  • **kw – keyword dictionary passed to OMFITascii class

omfit_brainfuse

class omfit_classes.omfit_brainfuse.OMFITbrainfuse(filename, **kw)[source]

Bases: omfit_classes.brainfuse.brainfuse, omfit_classes.omfit_ascii.OMFITascii

train_net(dB, inputNames=[], outputNames=[], max_iterations=100, hidden_layer=None, connection_rate=1.0, noise=0.0, fraction=0.5, norm_output={}, output_mean_0=False, robust_stats=0.0, weight_decay=0.1, spring_decay=1.0, activation_function='SIGMOID_SYMMETRIC')[source]
Parameters
  • dB – dictionary of input/output arrays

  • inputNames – train on these inputs (use all arrays not starting with OUT_ if not specified)

  • outputNames – train on these outputs (use all arrays starting with OUT_ if not specified)

  • max_iterations – >0: max number of iterations; <0: max number of iterations without improvement

  • hidden_layer – list of integers defining the NN hidden layer topology

  • connection_rate – float from 0. to 1. defining the density of the synapses

  • noise – add gaussian noise to training set

  • fraction – fraction of data used for training (the rest being used for validation): 0<fraction<1: random splitting; -1<fraction<0: sequential splitting; fraction>1: sequential splitting at integer index fraction

  • norm_output – normalize outputs

  • output_mean_0 – force average of normalized outputs to have 0 mean

  • robust_stats – 0<x<100: percentile of data to be considered; =0: mean and std; <0: median and MAD

  • weight_decay – exponential forget of the weight

  • spring_decay – link training weight decay to the validation error

Returns

std_out of training process
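The three splitting modes encoded by the fraction parameter can be sketched as follows. The `split_data` helper is hypothetical and illustrates only the documented convention, not the actual training-set handling inside brainfuse:

```python
import random

def split_data(records, fraction, seed=0):
    """Split records into (training, validation) following the documented
    convention: 0 < fraction < 1 -> random split with that training share;
    -1 < fraction < 0 -> sequential split with |fraction| training share;
    fraction > 1 -> sequential split at integer index fraction."""
    n = len(records)
    if 0 < fraction < 1:
        idx = list(range(n))
        random.Random(seed).shuffle(idx)  # seeded for reproducibility
        cut = int(n * fraction)
        train = [records[i] for i in idx[:cut]]
        valid = [records[i] for i in idx[cut:]]
        return train, valid
    if -1 < fraction < 0:
        cut = int(n * -fraction)
        return records[:cut], records[cut:]
    if fraction > 1:
        cut = int(fraction)
        return records[:cut], records[cut:]
    raise ValueError('unsupported fraction value')
```

For example, fraction=-0.5 puts the first half of the records in training; fraction=3 puts the first three records in training.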

class omfit_classes.omfit_brainfuse.brainfuse(filename, **kw)[source]

Bases: omfit_classes.brainfuse.libfann.neural_net

denormOutputs(inputs, outputs)[source]
normOutputs(inputs, outputs, denormalize=False)[source]
activate(dB, verbose=False)[source]
save()[source]
load()[source]
omfit_classes.omfit_brainfuse.activateNets(nets, dB)[source]
Parameters
  • nets – dictionary with OMFITbrainfuse objects (or path where to load NNs from)

  • dB – dictionary with entries to run on

Returns

tuple with (out,sut,targets,nets,out_)

omfit_classes.omfit_brainfuse.activateNetsFile(nets, inputFile, targetFile=None)[source]
Parameters
  • nets – dictionary with OMFITbrainfuse objects (or path where to load NNs from)

  • inputFile – ASCII file where to load the inputs to run the NN

  • targetFile – ASCII file where to load the targets for validating the NN

Returns

tuple with (out,sut,targets,nets,out_)

omfit_classes.omfit_brainfuse.activateMergeNets(nets, dB, merge_nets)[source]

omfit_cdb

class omfit_classes.omfit_cdb.OMFITcdb(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict

load()[source]
class omfit_classes.omfit_cdb.OMFITcdbValue(**kw)[source]

Bases: object

fetch()[source]
plot()[source]
property help

omfit_chease

class omfit_classes.omfit_chease.OMFITchease(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.omfit_ascii.OMFITascii

OMFIT class used to interface with CHEASE EXPEQ files

Parameters
  • filename – filename passed to OMFITascii class

  • **kw – keyword dictionary passed to OMFITascii class

load()[source]
save()[source]

Method used to save the content of the object to the file specified in the .filename attribute

Returns

None

static splineRZ(R, Z, Nnew)[source]

Auxiliary function to spline a single boundary from EXPEQ

Parameters
  • R – array 1 (R coordinate)

  • Z – array 2 (Z coordinate)

  • Nnew – new number of points

Returns

smoothed R,Z
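The resampling idea can be sketched with SciPy's periodic parametric splines. This is an illustrative re-implementation, not the actual OMFIT code; the function name spline_rz is hypothetical and the real routine may use a different parametrization or smoothing:

```python
import numpy as np
from scipy.interpolate import splev, splprep

def spline_rz(R, Z, n_new):
    # close the boundary so the periodic spline wraps cleanly
    R_c = np.r_[R, R[0]]
    Z_c = np.r_[Z, Z[0]]
    tck, _ = splprep([R_c, Z_c], s=0, per=True)  # periodic parametric spline
    R_new, Z_new = splev(np.linspace(0, 1, n_new), tck)
    return np.asarray(R_new), np.asarray(Z_new)

# resample a coarse 12-point circular boundary onto 50 points
theta = np.linspace(0, 2 * np.pi, 12, endpoint=False)
R_new, Z_new = spline_rz(1.7 + 0.6 * np.cos(theta), 0.6 * np.sin(theta), 50)
```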

EQsmooth(keep_M_harmonics, inPlace=True, equalAngle=False, doPlot=False)[source]

smooth plasma boundary by zeroing out high harmonics

Parameters
  • keep_M_harmonics – how many harmonics to keep

  • inPlace – operate in place (update this file or not)

  • equalAngle – use equal angle interpolation, and if so, how many points to use

  • doPlot – plot plasma boundary before and after

Returns

smoothed R and Z coordinates
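The harmonic-truncation idea behind this method can be sketched with an FFT of the complex boundary R + iZ. This is an assumed minimal version for illustration (the real method also supports equal-angle resampling and in-place updates):

```python
import numpy as np

def smooth_boundary(R, Z, keep_m):
    # Fourier-decompose the closed boundary as a complex curve
    c = np.fft.fft(R + 1j * Z)
    n = len(c)
    mask = np.zeros(n, dtype=bool)
    mask[: keep_m + 1] = True   # keep harmonics 0 .. keep_m
    mask[n - keep_m:] = True    # keep negative harmonics -1 .. -keep_m
    c[~mask] = 0.0              # zero out the high harmonics
    smoothed = np.fft.ifft(c)
    return smoothed.real, smoothed.imag

theta = np.linspace(0, 2 * np.pi, 128, endpoint=False)
R = 1.7 + 0.6 * np.cos(theta) + 0.01 * np.cos(20 * theta)  # circle + m=20 ripple
Z = 0.6 * np.sin(theta)
Rs, Zs = smooth_boundary(R, Z, keep_m=5)  # the m=20 ripple is removed
```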

addLimiter(R, Z)[source]

Insertion of a wall defined by coordinates (R,Z)

Parameters
  • R – radial coordinate

  • Z – vertical coordinate

Note: both must be normalized, and ndarray type

modifyBoundary()[source]

Interactively modify plasma boundary

from_gEQDSK(gEQDSK=None, conformal_wall=False, mode=None, rhotype=0, version=None, cocos=2)[source]

Modify CHEASE EXPEQ file with data loaded from gEQDSK

Parameters
  • gEQDSK – input gEQDKS file from which to copy the plasma boundary

  • conformal_wall – floating number that multiplies plasma boundary (should be >1)

  • mode – select profile to use from gEQDSK

  • rhotype – 0 for poloidal flux. 1 for toroidal flux. Only with version==’standard’

  • version – either ‘standard’ or ‘mars’

plot(bounds=None, **kw)[source]
Parameters
  • bounds

  • kw

Returns

class omfit_classes.omfit_chease.OMFITcheaseLog(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.omfit_ascii.OMFITascii

OMFIT class used to parse the CHEASE log FILES for the following parameters: betaN, NW, CURRT, Q_EDGE, Q_ZERO, R0EXP, B0EXP, Q_MIN, S_Q_MIN, Q_95

Parameters
  • filename – filename passed to OMFITascii class

  • **kw – keyword dictionary passed to OMFITascii class

load()[source]

Load CHEASE log file data

read_VacuumMesh(nv)[source]

Read vacuum mesh from CHEASE log file

Parameters

nv – number of radial intervals in vacuum (=NV from input namelist)

class omfit_classes.omfit_chease.OMFITexptnz(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.omfit_ascii.OMFITascii

OMFIT class used to interface with EXPTNZ files containing kinetic profiles

Parameters
  • filename – filename passed to OMFITascii class

  • **kw – keyword dictionary passed to OMFITascii class

load()[source]
exptnz2mars()[source]
class omfit_classes.omfit_chease.OMFITnuplo(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.omfit_ascii.OMFITascii

CHEASE NUPLO output file

load()[source]

omfit_chombo

class omfit_classes.omfit_chombo.OMFITchombo(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.omfit_ascii.OMFITascii

OMFIT class used to interface with CHOMBO input files

Parameters
  • filename – filename passed to OMFITascii class

  • **kw – keyword dictionary passed to OMFITascii class

load()[source]
save()[source]

The save method is meant to be overridden by classes that use OMFITobject as a superclass. If left as-is, this method detects whether .filename has changed and, if so, copies the file from the original .filename (stored in the .link attribute) to the new .filename

omfit_coils

class omfit_classes.omfit_coils.OMFITcoils(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.omfit_ascii.OMFITascii

OMFITobject used to interface with coils ascii files used in FOCUS and STELLOPT codes.

Parameters

filename – filename passed to OMFITascii class

All additional key word arguments passed to OMFITascii

load()[source]

Load the file and parse it into a sorted dictionary

save()[source]

Write the data in standard ascii format

plot(ax=None, nd=3, cmap='coolwarm_r', vmin=None, vmax=None, colorbar=True, **kwargs)[source]

Plot the coils in 3D, colored by their current values.

Parameters
  • ax – Axes. Axis into which the plot will be made.

  • cmap – string. A valid matplotlib color map name.

  • vmin – float. Minimum of the color scaling.

  • vmax – float. Maximum of the color scaling.

  • colorbar – bool. Draw a colorbar for the currents.

  • **kwargs – dict. All other key word arguments are passed to the mplot3d plot function.

Returns

list of Line3D objects. The line plotted for each coil.

to_OMFITfocuscoils(filename, nfcoil=8, target_length=0, coil_type=1, coil_symm=0, nseg=None, i_flag=1, l_flag=0)[source]

Convert to the OMFITfocuscoils ascii file format, which stores Fourier Harmonics of the coils. These files have additional settings, which can be set using key word arguments. Be sure to sync these with the focus input namelist!

Parameters
  • filename – str. Name of new file

  • nfcoil – int. Number of harmonics in decomposition

  • target_length – float. Target length of coils. Zero targets the initial length.

  • coil_type – int.

  • coil_symm – int. 0 for no symmetry. 1 for toroidal symmetry matching the boundary bnfp.

  • nseg – int. Number of segments. Default (None) uses the number of segments in the original file

  • i_flag – int.

  • l_flag – int.

Returns

OMFITcoils

to_OMFITGPECcoils(filename)[source]

Convert OMFITcoils to the OMFITGPECcoils ascii file format, which writes x,y,z points but no current information.

Parameters

filename – str. Name of new file

Returns

OMFITGPECcoils

class omfit_classes.omfit_coils.OMFITfocuscoils(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.omfit_ascii.OMFITascii

OMFITobject used to interface with focus ascii files used to describe coils in native FOCUS decomposition.

Parameters

filename – filename passed to OMFITascii class

All additional key word arguments passed to OMFITascii

load()[source]

Load the file and parse it into a sorted dictionary

save()[source]

Write the data in standard ascii format

plot(bnfp=1, ax=None, cmap='coolwarm_r', vmin=None, vmax=None, colorbar=True, **kwargs)[source]

Plot the coils in 3D, colored by their current values.

Parameters
  • bnfp – int. Toroidal mode number of symmetry used if coil_symm is 1

  • ax – Axes. Axis into which the plot will be made.

  • cmap – string. A valid matplotlib color map name.

  • vmin – float. Minimum of the color scaling.

  • vmax – float. Maximum of the color scaling.

  • colorbar – bool. Draw a colorbar for the currents.

  • **kwargs – dict. All other key word arguments are passed to the mplot3d plot function.

Returns

list of Line3D objects. The line plotted for each coil.

to_OMFITcoils(filename, bnfp=1)[source]

Convert to the standard OMFITcoils ascii file format, which stores x,y,z,i points by inverse Fourier transforming the FOCUS coil decompositions.

Parameters
  • filename – str. Name of new file

  • bnfp – int. Toroidal mode number of symmetry used if coil_symm is 1

  • gpec_format – bool. Convert to OMFITGPECcoils ascii formatting instead

Returns

OMFITcoils or OMFITGPECcoils

to_OMFITGPECcoils(filename)[source]

Convert OMFITcoils to the OMFITGPECcoils ascii file format, which writes x,y,z points but no current information.

Parameters
  • filename – str. Name of new file

  • bnfp – int. Toroidal mode number of symmetry used if coil_symm is 1

Returns

OMFITGPECcoils

class omfit_classes.omfit_coils.OMFITGPECcoils(*args, **kwargs)[source]

Bases: omfit_classes.omfit_coils.OMFITcoils

Class that reads ascii GPEC .dat coil files and converts them to the standard coil file format used by the FOCUS and STELLOPT codes.

NOTE: This will clobber the original ascii formatting!! Do not connect to original file paths!!

load()[source]

Load the file and parse it into a sorted dictionary

save()[source]

Write the data in standard ascii format

to_OMFITcoils(filename)[source]

Convert to the standard OMFITcoils ascii file format, which stores x,y,z,i points.

Parameters

filename – str. Name of new file

Returns

OMFITcoils

omfit_cotsim

class omfit_classes.omfit_cotsim.OMFITcotsim(*args, **kwargs)[source]

Bases: omfit_classes.omfit_matlab.OMFITmatlab

to_omas(ods=None, time_index=None)[source]

omfit_csv

class omfit_classes.omfit_csv.OMFITcsv(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.omfit_ascii.OMFITascii

OMFIT class used to interface with Comma Separated Value files

This class makes use of the np.genfromtxt and np.savetxt functions https://docs.scipy.org/doc/numpy/user/basics.io.genfromtxt.html

Parameters
  • filename – filename passed to OMFITascii class

  • delimiter – delimiter character that is used to separate the values

  • comments – character or list of characters that will be recognized as comments

  • fmt – format used to save the data to file

  • missing_values – The set of strings corresponding to missing data

  • filling_values – The set of values to be used as default when the data are missing

  • **kw – keyword dictionary passed to OMFITascii class
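The underlying np.genfromtxt / np.savetxt round trip that these parameters map onto can be sketched with plain NumPy (illustrative only, without the OMFIT object machinery; the sample data are made up):

```python
import io
import numpy as np

text = """# sample data with a comment line and a missing value
1.0,2.0,3.0
4.0,,6.0
"""

data = np.genfromtxt(
    io.StringIO(text),
    delimiter=',',        # value separator (the `delimiter` parameter)
    comments='#',         # comment character(s) (`comments`)
    missing_values='',    # strings treated as missing data (`missing_values`)
    filling_values=-1.0,  # default used where data are missing (`filling_values`)
)

out = io.StringIO()
np.savetxt(out, data, delimiter=',', fmt='%.3e')  # `fmt` controls the saved format
```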

load()[source]

Method used to load the content of the file specified in the .filename attribute

Returns

None

save()[source]

Method used to save the content of the object to the file specified in the .filename attribute

Returns

None

omfit_ctrl_analysis

Provides tools used for analyzing tokamak control systems

omfit_classes.omfit_ctrl_analysis.auto_scale(x, y)[source]

Given some data, pick a scale that would probably work nicely for most smoothing functions.

If the user doesn’t like it, they can provide a scale explicitly.

Parameters
  • x – float array Independent variable.

  • y – float array Dependent variable. We assume you want to smooth this and have it look nice.

Returns

float Scale for smoothing y(x) in same units as x.

omfit_classes.omfit_ctrl_analysis.mean_trapz(y, x=None, nan_policy='ignore')[source]

Average y using trapezoid rule and step spacing from x, if provided.

Can be used to effectively weight y by dx and cope with uneven x spacing.

Parameters
  • y – float array Dependent variable to average

  • x – float array [optional] Independent variable corresponding to y. If not provided, will assume even spacing.

  • nan_policy – string ‘ignore’ or ‘propagate’

Returns

float Average value of y using trapezoid integration and accounting for potential uneven x spacing
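A minimal sketch of the documented behavior (without the nan_policy handling of the real function): integrate with the trapezoid rule and divide by the total interval, so each y value is effectively weighted by its local dx.

```python
import numpy as np

def mean_trapz(y, x=None):
    y = np.asarray(y, dtype=float)
    if x is None:
        x = np.arange(len(y), dtype=float)  # assume even spacing
    x = np.asarray(x, dtype=float)
    dx = np.diff(x)
    # trapezoid integral divided by the total interval -> dx-weighted average
    return np.sum(0.5 * (y[1:] + y[:-1]) * dx) / (x[-1] - x[0])

x = np.array([0.0, 1.0, 10.0])  # uneven spacing
y = np.array([0.0, 1.0, 1.0])
avg = mean_trapz(y, x)          # the long dwell at y=1 dominates the average
```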

class omfit_classes.omfit_ctrl_analysis.ControlAnalysis(**kw)[source]

Bases: collections.OrderedDict

Contains helper functions for analyzing control problems.

Parameters
  • topic – string [optional] Topic of the control analysis, like “Detachment control”

  • device – string [optional] Which tokamak or device is being analyzed? Like ‘DIII-D’

  • shot – int [optional] Which shot is being analyzed?

smoother_options = ['smooth_by_convolution', 'smooth', 'butter_smooth', 'median_filter', 'trismooth']
describe_case(topic=None, device=None, shot=None)[source]

Form a string describing the case being analyzed using user-supplied data

update_parent_module_settings(module_root=None)[source]

Uses the settings in this class instance to update settings in the parent module to force consistency

This is like the reverse of [‘SCRIPTS’][‘general’][‘control_analysis_setup’] in the PCS module.

Parameters

module_root – OMFITmodule instance [optional] You can manually provide this in case there’s trouble identifying it automatically, which is unlikely.

Returns

bool Was an update done?

load_results_to_rdb(orders=None, tag=None, overwrite=0, test_data=None)[source]

Loads results to the RDB

Parameters
  • orders – list of ints Fit orders to consider

  • tag – string ‘sys’ or ‘cq’ for base system identification or control quality evaluation

  • overwrite

    int or bool Passed to OMFITrdb.update(), where it has the following effects:

    • 0/False: abort loading any results if anything would be overwritten

    • 1/True: ignore pre-existing results and try to load regardless of what would be overwritten

    • 2: avoid overwriting by popping keys out of new data if they are found with non-None in existing data

  • test_data – dict [optional] Used for testing: if a dictionary is supplied, its keys should match columns in the RDB.

Returns

OMFITrdb instance

estimate_uncertainty(reference, min_frac_err=0.05, max_frac_err=0.35)[source]

Estimates uncertainty in some quantity indicated by reference. The primary method relies on assuming that high frequency variation is noise that can be described as random measurement error or uncertainty. This requires a definition of “high” frequency, which is based on the system smoothing timescale.

Parameters
  • reference – string The data to be analyzed are x, y = self[‘history’][‘x’], self[‘history’][reference]

  • min_frac_err – float Minimum error in y as a fraction of y

  • max_frac_err – float Maximum error in y as a fraction of y

Returns

float array Uncertainty in self[‘history’][reference], with matching shape and units.
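The documented approach can be sketched as follows: treat the residual after smoothing as noise, then clip the fractional error between the two bounds. This is an illustrative stand-alone version (the real method ties the smoothing scale to the system timescale rather than a fixed point count):

```python
import numpy as np

def estimate_uncertainty(y, smooth_pts=11, min_frac_err=0.05, max_frac_err=0.35):
    kernel = np.ones(smooth_pts) / smooth_pts
    y_smooth = np.convolve(y, kernel, mode='same')  # boxcar smoothing
    pad = smooth_pts // 2                           # exclude edge effects
    noise = np.std((y - y_smooth)[pad:-pad])        # high-frequency scatter
    frac = np.clip(noise / np.maximum(np.abs(y), 1e-30), min_frac_err, max_frac_err)
    return frac * np.abs(y)

rng = np.random.default_rng(0)
y = 10.0 + 0.2 * rng.standard_normal(500)  # smooth signal plus ~2% noise
err = estimate_uncertainty(y)              # ~2% scatter is clipped up to 5% of y
```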

p_first_order(par, x, u, uniform_dx=None)[source]

Calculates expected response to a given target waveform using a first order plus dead time (FOPDT) model

Parameters
  • par – Parameters instance Fit parameters

  • x – float array independent variable

  • u – float array command or actuator history as a function of x

  • uniform_dx – bool Evenly spaced x?

Returns

float array Expected response vs. time

first_order(x, u, y0, gain, lag, scale, d_gain_du=0, ex=0, u0=None)[source]

Calculates expected response to a given target waveform using a first order plus dead time (FOPDT) model

Parameters
  • x – float array independent variable

  • u – float array command or actuator history as a function of x

  • y0 – float Initial y value (y units)

  • gain – float Gain = delta_y / delta_u: how much will response variable change given a change in command? Units = y units / u units

  • lag – float How long before a change in command begins to cause a change in response? (x units)

  • scale – float Timescale for response (x units)

  • d_gain_du – float Change in gain per change in u away from u0. This is normally exactly 0. Should you really change it?

  • ex – float Exponent ex in transformation Y = y * (y/y0)**ex. Transforms output y after running model. It is a modification of the standard model. This seemed like it was worth a shot, but it didn’t do me any good.

  • u0 – float Reference u value (defaults to u[0])

Returns

float array Expected response vs. time
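The core FOPDT response (without the d_gain_du and ex extensions, and assuming evenly spaced x) can be sketched with a forward Euler integration; the model form below is the standard first-order-plus-dead-time assumption, not a copy of the OMFIT implementation:

```python
import numpy as np

def fopdt(x, u, y0, gain, lag, scale, u0=None):
    if u0 is None:
        u0 = u[0]
    dx = x[1] - x[0]
    u_delayed = np.interp(x - lag, x, u, left=u[0])  # dead time: shift command by lag
    y = np.empty_like(x)
    y[0] = y0
    for i in range(1, len(x)):
        # dy/dx = (y0 + gain * (u_delayed - u0) - y) / scale
        y[i] = y[i - 1] + dx * (y0 + gain * (u_delayed[i - 1] - u0) - y[i - 1]) / scale
    return y

x = np.linspace(0.0, 10.0, 1001)
u = np.where(x >= 1.0, 1.0, 0.0)  # unit step command at x = 1
y = fopdt(x, u, y0=0.0, gain=2.0, lag=0.5, scale=1.0)
```

The response stays flat until the command change plus the dead time, then relaxes toward y0 + gain * (u - u0) on the timescale set by scale.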

p_second_order(par, x, u)[source]

Version of second_order() using Parameters

second_order(x, u, y0, gain, lag, scale, damping, d_gain_du=0, ex=0, u0=None, uniform_dx=None)[source]

Calculates expected response to a given target waveform using a second order plus dead time (SOPDT) model

Parameters
  • x – float array independent variable

  • u – float array command or actuator history as a function of x

  • y0 – float Initial y value (y units)

  • gain – float Gain = delta_y / delta_u: how much will response variable change given a change in command? Units = y units / u units

  • lag – float How long before a change in command begins to cause a change in response? (x units)

  • scale – float Timescale for response (x units)

  • damping – float unitless

  • d_gain_du – float Change in gain per change in u away from u0. This is normally exactly 0. Should you really change it?

  • ex – float Exponent ex in the transformation Y = y * (y/y0)**ex

  • u0 – float Reference value of u. Defaults to u[0].

  • uniform_dx – bool x is evenly spaced

Returns

float array Expected response vs. time

third_order(x, u, y0, gain, lag, scale, damping, a3, d_gain_du=0, ex=0, u0=None, uniform_dx=None)[source]

Calculates expected response to a given target waveform using a third order plus dead time (TOPDT) model

Where I made up the third order expression and the exact implementation may not be standard. Because this isn’t confirmed as a standard sys id model, it is not recommended for use.

Parameters
  • x – float array independent variable

  • u – float array command or actuator history as a function of x

  • y0 – float Initial y value (y units)

  • gain – float Gain = delta_y / delta_u: how much will response variable change given a change in command? Units = y units / u units

  • lag – float How long before a change in command begins to cause a change in response? (x units)

  • scale – float Timescale for response (x units)

  • damping – float unitless

  • a3 – float unitless factor associated with third order term

  • d_gain_du – float Change in gain per change in u away from u0. This is normally exactly 0. Should you really change it?

  • ex – float Exponent ex in the transformation Y = y * (y/y0)**ex

  • u0 – float Reference value of u. Defaults to u[0].

  • uniform_dx – bool x is evenly spaced

Returns

float array Expected response vs. time

p_third_order(par, x, u)[source]

Version of third_order() using Parameters

make_first_order_guess(x, u, y)[source]

Guesses parameters to use in the first order model

Parameters
  • x – 1d float array independent variable

  • u – 1d float array command or target as a function of x

  • y – 1d float array response as a function of x

Returns

list of 4 floats Guesses for y0, gain, lag, and scale
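The kind of heuristics such a guesser typically uses can be sketched as below. These estimators are assumptions for illustration only (the actual make_first_order_guess may compute its guesses differently): y0 from the first sample, gain from the overall change, lag from where the command first moves, and scale from the 63% rise point.

```python
import numpy as np

def guess_first_order(x, u, y):
    y0 = y[0]
    du = u[-1] - u[0]
    gain = (y[-1] - y0) / du if du != 0 else 1.0
    # lag guess: first x where the command changes
    changed = np.nonzero(np.abs(u - u[0]) > 0)[0]
    lag = x[changed[0]] - x[0] if len(changed) else 0.0
    # scale guess: time after the lag to reach 63% of the total response
    rise = np.nonzero(np.abs(y - y0) >= 0.63 * np.abs(y[-1] - y0))[0]
    scale = (x[rise[0]] - x[0] - lag) if len(rise) else (x[-1] - x[0]) / 3.0
    return y0, gain, lag, scale

# synthetic first-order response to a step at x = 1 with gain 2 and timescale 1
x = np.linspace(0.0, 10.0, 1001)
u = np.where(x >= 1.0, 1.0, 0.0)
y = np.where(x >= 1.0, 2.0 * (1.0 - np.exp(-(x - 1.0))), 0.0)
y0, gain, lag, scale = guess_first_order(x, u, y)
```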

make_guess(x, y, u, order, guess=None, variable_gain=False, modified_y=False)[source]

Guesses fit parameters

Parameters
  • x – float array

  • y – float array

  • u – float array

  • order – int

  • guess – list of at least (2 + order) numbers

  • variable_gain – bool This is NOT a standard option. Should you really touch it?

  • modified_y – bool Transforms Y = y * (y/y_0)**ex, where ex is a new parameter

Returns

Parameters instance

model_residual(par, x, u, y, y_err=None, order=None, just_model=False)[source]

Prepares residual between data and model given some parameters, for use in minimization/fitting

fit_response_model(x, y, u, y_err=None, order=2, guess=None, name=None, npts=None, method=None, **kw)[source]

Fits response /output data to actuator / command / input data

Parameters
  • x – float array Independent variable

  • y – float array Output or response data

  • u – float array Input data

  • y_err – float array [optional] Uncertainty in y, matching units and dimensions of y. If omitted, it’s assumed to be 1.

  • order – int Order of the model, such as 1 or 2

  • guess – list of floats [optional] Initial guesses for y0, gain, lag, timescale, and damping (2nd order only)

  • name – string Tag name where detailed results are placed

  • npts – int Downsample data if necessary to avoid fitting more than this many points

  • method – string Minimization method, like ‘nelder’

  • kw

    Additional keywords:

    nan_policy: string
    Instructions for handling NaNs in model function output, like ‘raise’, ‘propagate’, or ‘omit’

    xunits: string

    yunits: string

    uunits: string

    cmd: bool
    u is a command with potentially different units and scale than y, as opposed to a target, which should be similar to y.

    variable_gain: bool
    Gain is a linear function of command-command_0 instead of a constant. This is NOT standard.

    modified_y: bool
    Transforms Y = y*(y/y_0)**ex, where ex is an additional parameter

Returns

lmfit minimizer result

calculate_time_domain_specs_order_2()[source]

Calculates control performance specifications in the time domain; valid for order 2 fits

get_plot_data(order=None, x=None, y=None, u=None, y_err=None, **kw)[source]

Sets up data for fit plot

Returns

tuple containing: x, y, u (float arrays); y_err (float array or None); xunits, yunits, uunits (strings); name (string); cmd, annotate (bools); my, gy (float arrays); order, ishot (ints)

plot_fit_response(order=None, x=None, y=None, u=None, y_err=None, extra_data=None, split=None, **kw)[source]

Plots fit to response model

Parameters
  • order – int Order of the model to use. Must be already complete.

  • x – float array Independent variable 1D: single shot 2D: multi-shot

  • y – float array Response data as a function of x

  • u – float array Target or command data as a function of x

  • y_err – float array [optional] Uncertainty in y, matching shape and units of y

  • extra_data

    list of dicts [optional] Each item contains:

    value: float array (required). Must match dimensions of x or supply its own xx

    xx: independent variable associated with value. Required if value isn’t the same shape as x

    axes: ‘y’ or ‘u’, denoting whether quantity should be grouped with response or cmd/target (optional)

    label: string (optional)

    color: matplotlib color spec (optional)

    linestyle: matplotlib line style spec (optional)

    plotkw: dict: additional plot keywords (optional)

  • split – bool When plotting multiple shots, don’t overlay them; make more subplots so each can be separate. Set to none to split when twod, and leave alone when only one shot is considered.

  • kw

    Additional keywords:

    show_guess: bool [optional]
    Switches display of guesses on/off. Defaults to on for single-shot mode and off for multi-shot mode.

    show_model: bool
    Switches display of fit model on/off. Defaults to on. One might want to turn it off if one needs to customize just this trace with a separate plot command later.

    xunits: string

    yunits: string

    uunits: string

    cmd: bool
    Treat u as a command with different units and scale than y, as opposed to a target, which should be comparable to y. Defaults to reading from the fit output.

    name: string
    Tag name of fit output

    annotate: bool
    Mark annotations on the plot. Ignored if x is 2D unless ishot is used to select a single shot.

    ishot: int or None
    If int, selects which column in multi-shot/2D input to plot. If None, plots all shots/columns. Ignored if x is 1D. This is an index from 0, not an actual literal shot number.

Returns

(Figure instance, 2d array of Axes instances)

auto_cornernote(ax=None, no_multishot=False, ishot=None)[source]

Applies labels to a plot using user-supplied information about the case being analyzed

If information is missing, labels may be skipped.

Parameters
  • ax – Axes instance These axes will be given a title reflecting the topic under investigation, if it is known

  • no_multishot – bool Suppress shot if it’s an iterable

  • ishot – int or None Select just one shot when the instance is using multiple

printq(*args)[source]

Shortcut for printd with a centrally controlled and consistent topic

fit_report(orders=None, notes=None)[source]

Prints a report with key scalar metrics

complain(quiet=False, no_good_news=True)[source]

Prints or returns problems from self[‘__details__’][‘problems’], sorted by severity

Parameters
  • quiet – bool Don’t actually print; just return the formatted report as a string

  • no_good_news – bool Don’t print unless there’s a problem to complain about

Returns

string The formatted report, which is printed by default

class omfit_classes.omfit_ctrl_analysis.SystemIdentification(x=None, response=None, command=None, response_uncertainty=None, target=None, order=2, **kw)[source]

Bases: omfit_classes.omfit_ctrl_analysis.ControlAnalysis

Manages fits to control responses to commands to identify system parameters

Parameters
  • x

    float array Independent variable; can be used for weighting steps in case of uneven spacing of data Single shot mode: x should be 1D Multi-shot mode: x should be 2D, with x[:, i] being the entry for shot i.

    If some shots have shorter x than others, pad the end with NaNs to get consistent length. Variables like gain, lag, and scale will be fit across all shots together assuming that the same system is being studied in each.

  • response – float array matching shape of x Dependent variable as a function of x. These are actual measurements of the quantity that the system tried to control.

  • command – float array matching shape of x Dependent variable as a function of x This is the command that was used to try to produce a response in y

  • response_uncertainty – float array [optional] Uncertainty in response, matching dimensions and units of response. Defaults to 1.

  • order – int Default order of the control model to use: 1 or 2 for now. Can be overridden later when calling fit.

  • time_range – two element numeric iterable in x units Used to control gathering of data if x, response, command, and target aren’t supplied directly. If x units are converted using ???_x_factor, specify time_range in final units after conversion.

  • [response, command, target, enable]_pointname –

    string Alternative to supplying x, y data for response, command, or target. Requires the following to be defined:

    device, shot, time_range

    Also considers the following, if supplied:

    ???_treename, ???_x_factor, ???_y_factor

    Where ??? is one of [response, command, target]

  • [response, command, target, enable]_treename – string or None string: MDSplus tree to use as data source for [response, command, or target] None: Use PTDATA

  • [response, command, target, enable]_x_factor – float Multiply x data by this * overall_x_factor. All factors are applied immediately after gathering from MDSplus and before performing any operations or comparisons. All effective factors are the product of an individual factor and an overall factor. So, response x data are mds.dim_of(0) * response_x_factor * overall_x_factor

  • enable_value – float If supplied, data gathered using enable_pointname and enable_treename must be equal to this value in order for the command to be non-zero.

  • [response, command, target, enable]_y_factor – float

  • smooth_response – bool Smooth response data before further processing. Helpful if signal is polluted with irrelevant high frequency noise.

  • overall_x_factor – float

  • overall_y_factor – float

  • response_units – string

  • command_units – string

  • time_units – string

  • kw – Keywords passed to secondary setup methods

fit(order=None, guess=None, **kw)[source]

Fit measured response data to modeled response to a given command

Parameters
  • order – int Order of the model: 1 or 2

  • guess – list of numbers Guesses for y0, gain, lag, scale, and (2nd order only) damping

  • kw

    additional keywords passed to fit_response_model:

    variable_gain: bool
    False: STANDARD! USE THIS UNLESS YOU’RE OUT OF OPTIONS! Gain is a constant
    True: gain is a linear function of command - command_0

Returns

Minimizer result

plot_input_data(show_target=True, fig=None)[source]

Plots data from history, such as command and response

plot(show_target=True, **kw)[source]

Wrapper for ControlAnalysis.plot_fit_response() that can add extra data to include the target

Parameters
  • show_target – bool Add target as extra data

  • kw – Additional keywords passed to plot_fit_response()

Returns

(Figure instance, array of Axes instances)

report(orders=None)[source]

Prints results of fits.

Parameters

orders – list [optional] Sets which fit results are selected. [1, 2] prints order 1 and order 2 fit results.

class omfit_classes.omfit_ctrl_analysis.ControlQuality(x=None, measurement=None, target=None, measurement_uncertainty=None, xwin=None, **kw)[source]

Bases: omfit_classes.omfit_ctrl_analysis.ControlAnalysis

Calculates metrics for control quality

Parameters
  • x – float array Independent variable; can be used for weighting steps in case of uneven spacing of data

  • measurement – float array Dependent variable as a function of x. These are actual measurements of the quantity that the system tried to control.

  • target – float or float array Dependent variable as a function of x, or a scalar value. This is the target for measurement. Perfect control would be measurement = target at every point.

  • measurement_uncertainty – float array [optional] Uncertainty in measurement in matching dimensions and units. If not supplied, it will be estimated.

  • enable – float array [optional] Enable switch. Don’t count activity when control is disabled. Disabled means when the enable switch doesn’t match enable_value.

  • command – float array [optional] Command to the actuator. Some plots will be skipped if this is not provided, but most analysis can continue.

  • xwin – list of two-element numeric iterables in units of x Windows for computing some control metrics, like integral_square_error. You can use this to highlight some step changes in the target and get responses to different steps.

  • units – string Units of measurement and target

  • command_units – string [optional] Units of command, if supplied. Used for display.

  • time_units – string Units of x Used for display, so quote units after applying overall_x_factor or measurement_x_factor, etc.

  • time_range – two element numeric iterable in x units Used to control gathering of data if x, response, command, and target aren’t supplied directly. If x units are converted using ???_x_factor, specify time_range in final units after conversion.

  • [measurement, target, measurement_uncertainty, enable, command]_pointname –

    string Alternative to supplying x, y data for measurement, target, measurement_uncertainty, enable, or command. Requires the following to be defined:

    device, shot, time_range

    Also considers the following, if supplied:

    ???_treename, ???_x_factor, ???_y_factor

    Where ??? is one of [measurement, target, measurement_uncertainty]

  • [measurement, target, enable, command]_treename – string or None string: MDSplus tree to use as data source for measurement or target None: Use PTDATA

  • [measurement, target, enable, command]_x_factor – float Multiply x data by this * overall_x_factor. All factors are applied immediately after gathering from MDSplus and before performing any operations or comparisons. All effective factors are the product of an individual factor and an overall factor. So, target x data are mds.dim_of(0) * target_x_factor * overall_x_factor

  • [measurement, target, enable, command]_y_factor – float

  • enable_value – float enable must match this value for data to count toward metrics

  • overall_x_factor – float

  • overall_y_factor – float

  • kw – Keywords passed to secondary setup methods

make_effective_target()[source]

Assumes that turning on the control counts as changing the target and makes modifications

Prepends history with a step where the target matches the initial measured value. Then if the measurement is initially about 12 eV and the control is turned on with a target of 5 eV, it will act like the target changed from 12 to 5.

calculate()[source]

Runs calculations for evaluating control quality

calculate_standard_metrics()[source]

Manages calculations of standard metrics and organizes results

calculate_tracking_metrics()[source]

Tries to quantify how bad the controller is with moving targets

fit(order=2, guess=None, force=False)[source]

Fits response (measurement) data to model response to change in target

Parameters
  • order – int Order of the model. Supported: 1 and 2

  • guess – list of floats Guesses / initial values for y0, gain, lag, scale, and damping (2nd order only)

  • force – bool Try to fit even if it seems like a bad idea

Returns

minimizer result
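The exact response model is internal to the class, but a second-order fit of this kind can be sketched with scipy.optimize.curve_fit, assuming a standard underdamped step-response parameterization in terms of y0, gain, lag, scale, and damping (the parameterization here is an assumption, not the class's actual model):

```python
import numpy as np
from scipy.optimize import curve_fit

def step_response_2nd(t, y0, gain, lag, scale, damping):
    """Illustrative underdamped second-order step response."""
    tt = np.clip((t - lag) / scale, 0, None)
    wd = np.sqrt(max(1.0 - damping ** 2, 1e-12))
    shape = 1 - np.exp(-damping * tt) * (np.cos(wd * tt) + damping / wd * np.sin(wd * tt))
    return y0 + gain * shape

# Synthetic "measurement" responding to a target step at t = lag
t = np.linspace(0, 10, 200)
truth = (0.5, 2.0, 1.0, 0.8, 0.4)
rng = np.random.default_rng(0)
y = step_response_2nd(t, *truth) + 0.01 * rng.standard_normal(t.size)

# Fit with guesses near the truth (the real method builds its own guesses)
popt, pcov = curve_fit(step_response_2nd, t, y, p0=[0.4, 1.8, 0.9, 0.7, 0.5])
```

For order=1, the damping term would be dropped, leaving y0, gain, lag, and scale.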

calculate_fit_based_secondary_metrics(orders=None)[source]

Some models have formulae for calculating derived properties and control metrics

report(orders=None)[source]

Prints a report with key scalar metrics

plot(time_range=None)[source]

Plots key control quality data

subplot_main_quantities(ax, time_range=None, **kw)[source]

Plots primary quantities like measurement and target

Parameters
  • ax – Axes instance

  • time_range – sorted two element numeric iterable [optional] Zoom into only part of the data by specifying a new time range in xunits (probably seconds). If not provided, the default time_range will be used.

  • kw – extra keywords passed to pyplot.plot() Recommended: instead of setting color here, set self[‘__plot__’][‘target_color’] and self[‘__plot__’][‘measurement_color’] Recommended: do not set label, as it will affect both the target and measurement

subplot_errors(ax, norm=False, time_range=None, quote_avg=True, **kw)[source]

Plots error between measurement and target

Parameters
  • ax – Axes instance

  • norm – bool Normalize quantities based on typical independent and dependent data scales or intervals. The norm factors are in self[‘summary’][‘norm’] and self[‘summary’][‘xnorm’]

  • time_range – sorted two element numeric iterable [optional] Zoom into only part of the data by specifying a new time range in xunits (probably seconds). If not provided, the default time_range will be used.

  • quote_avg – bool Include all-time average in a legend label

  • kw – extra keywords passed to pyplot.plot() Recommended: don’t set label as it will override both series drawn by this function Note: if alpha is set and numeric, the unsmoothed trace will have alpha *= 0.3

subplot_control_metrics(ax, norm=False, time_range=None, **kw)[source]

Plots classic control metrics vs time

Parameters
  • ax – Axes instance

  • norm – bool

  • time_range – sorted two element numeric iterable [optional] Zoom into only part of the data by specifying a new time range in xunits (probably seconds). If not provided, the default time_range will be used.

  • kw – additional keywords passed to plot Recommended: do not set color this way. Instead, set the list self[‘__plot__’][‘other_error_colors’] Recommended: do not set label this way. It will override all three traces’ labels.

subplot_pre_correlation(ax, norm=True, time_range=None, **kw)[source]

Plot derivatives and stuff that goes into correlation tests

Parameters
  • ax – Axes instance

  • norm – bool

  • time_range – sorted two element numeric iterable [optional] Zoom into only part of the data by specifying a new time range in xunits (probably seconds). If not provided, the default time_range will be used.

  • kw – extra keywords passed to plot Recommended: don’t set label or color Alpha will be multiplied by 0.3 for some plots if it is numeric

subplot_correlation(ax, which='target', norm=True, time_range=None, **kw)[source]

Plots error vs. other quantities, like rate of change of target

Parameters
  • ax – Axes instance

  • which – string Controls the quantity on the x axis of the correlation plot. ‘target’ has a special meaning; otherwise, it’s d(target)/dx

  • norm – bool

  • time_range – sorted two element numeric iterable [optional] Zoom into only part of the data by specifying a new time range in xunits (probably seconds). If not provided, the default time_range will be used.

  • kw – Additional keywords passed to pyplot.plot() Recommended: Don’t set marker as it is set for both series If alpha is numeric, it will be multiplied by 0.3 for some data but not for others

subplot_command(ax, time_range=None, **kw)[source]

Plots the actual command to the actuator vs. time

Parameters
  • ax – Axes instance

  • time_range – sorted two element numeric iterable [optional] Zoom into only part of the data by specifying a new time range in xunits (probably seconds). If not provided, the default time_range will be used.

  • kw – Additional keywords passed to pyplot.plot() Recommended: Don’t set label as it will affect both series If it is numeric, alpha will be multiplied by 0.3 for some data but not others

omfit_classes.omfit_ctrl_analysis.sanitize_key(key)[source]

Replaces illegal characters in a string so it can be used as a key in the OMFIT tree

Parameters

key – string

Returns

string
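The exact character set treated as illegal is defined inside OMFIT; a minimal sketch with an assumed legal set (anything outside [0-9A-Za-z_] is replaced) looks like:

```python
import re

def sanitize_key_sketch(key):
    """Replace characters outside an assumed legal set [0-9A-Za-z_]
    with underscores (the real function defines its own illegal set)."""
    return re.sub(r'[^0-9A-Za-z_]', '_', key)
```

For example, sanitize_key_sketch('T_e [eV]') gives 'T_e__eV_', which is safe to use as a tree key.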

omfit_classes.omfit_ctrl_analysis.remove_periodic_noise(x, y, baseline_interval, amp_threshold=0.1, min_freq='auto', max_freq=None, debug=False)[source]

Tries to remove periodic noise from a signal by characterizing a baseline interval and extrapolating

  1. FFT signal during baseline interval, when there should be no real signal (all noise)

  2. Cut off low amplitude parts of the FFT and those outside of min/max freq

  3. Find frequencies where the baseline has high amplitude

  4. Suppress frequency components that appear prominently in the baseline

Parameters
  • x – 1D float array

  • y – 1D float array

  • baseline_interval – two element numeric iterable Should give start and end of the time range / x range to be used for noise characterization. Both ends must be within the range of x.

  • amp_threshold – float Fraction of peak FFT magnitude (not counting 0 frequency) to use as a threshold. FFT components with magnitudes below this will be discarded.

  • min_freq – float [optional] Also remove low frequencies while cleaning low amplitude components out of the FFT

  • max_freq – float [optional] Also remove high frequencies while cleaning low amplitude components out of the FFT

  • debug – bool Returns intermediate quantities along with the primary result

Returns

1D float array or dict

If debug is False: a 1D array like y, but with the best estimate of the periodic noise removed

If debug is True: a dict of intermediate quantities along with the primary result
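The four steps above can be sketched for a uniformly sampled signal. This is a simplified stand-in, not the actual implementation: it assumes uniform spacing and omits the min_freq/max_freq cuts and the debug output.

```python
import numpy as np

def remove_periodic_noise_sketch(x, y, baseline_interval, amp_threshold=0.1):
    """Simplified stand-in for the steps above, assuming uniform sampling."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    dt = x[1] - x[0]
    sel = (x >= baseline_interval[0]) & (x <= baseline_interval[1])
    yb = y[sel]
    # 1. FFT the baseline window, where the signal should be pure noise
    amp = np.abs(np.fft.rfft(yb)) / yb.size
    freq = np.fft.rfftfreq(yb.size, dt)
    # 2.-3. Keep only prominent components (ignore the DC term)
    peak = amp[1:].max()
    noise_freqs = freq[1:][amp[1:] >= amp_threshold * peak]
    # 4. Re-estimate amplitude/phase of each component from the baseline
    #    and subtract it over the whole record
    clean = y.copy()
    for fk in noise_freqs:
        c = 2.0 / yb.size * np.sum(yb * np.exp(-2j * np.pi * fk * x[sel]))
        clean -= np.real(c * np.exp(2j * np.pi * fk * x))
    return clean
```

With a 50 Hz tone riding on a step signal, characterizing the noise on the pre-step baseline and extrapolating removes the tone from the whole record.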

omfit_data

omfit_classes.omfit_data.reindex_interp(data, method='linear', copy=True, interpolate_kws={'fill_value': nan}, **indexers)[source]

Conform this object onto a new set of indexes, filling in missing values using interpolation. If only one indexer is specified, utils.uinterp1d is used. If more than one indexer is specified, utils.URegularGridInterpolator is used.

Parameters
  • copy – bool, optional If copy=True, the returned array’s dataset contains only copied variables. If copy=False and no reindexing is required then original variables from this array’s dataset are returned.

  • method – {‘linear’}, optional Method to use for filling index values in indexers not found on this data array: linear: linear interpolation between points

  • interpolate_kws – dict, optional Keyword arguments passed to either uinterp1d (if len(indexers)==1) or URegularGridInterpolator.

  • **indexers – dict Dictionary with keys given by dimension names and values given by arrays of coordinate tick labels. Any mis-matched coordinate values will be filled in with NaN, and any mis-matched dimension names will simply be ignored.

Returns

Another dataset array, with new coordinates and interpolated data.

See Also:

DataArray.reindex_like align
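For the single-indexer case, the behavior (linear interpolation onto the new coordinates, with NaN fill outside the original range) can be sketched with plain numpy. This is a sketch only; the real function operates on xarray objects and dispatches to uinterp1d or URegularGridInterpolator.

```python
import numpy as np

def reindex_interp_sketch(old_coord, values, new_coord):
    """Linear interpolation onto new_coord, NaN outside the original
    range (mimicking interpolate_kws={'fill_value': nan})."""
    old_coord = np.asarray(old_coord, dtype=float)
    values = np.asarray(values, dtype=float)
    new_coord = np.asarray(new_coord, dtype=float)
    out = np.interp(new_coord, old_coord, values)
    out[(new_coord < old_coord[0]) | (new_coord > old_coord[-1])] = np.nan
    return out

# Conform data on coordinates [0, 1, 2] onto a new set of indexes
out = reindex_interp_sketch([0, 1, 2], [0.0, 1.0, 4.0], [0.0, 0.5, 1.5, 3.0])
```

Here the point at 3.0 falls outside the original coordinate range and comes back as NaN.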

omfit_classes.omfit_data.reindex_conv(data, method='gaussian', copy=True, window_sizes={}, causal=False, interpolate=False, std_dev=2, **indexers)[source]

Conform this object onto a new set of indexes, filling values along changed dimensions using nu_conv on each in the order they are kept in the data object.

Parameters
  • copy – bool, optional If copy=True, the returned array’s dataset contains only copied variables. If copy=False and no reindexing is required then original variables from this array’s dataset are returned.

  • method – str/function, optional Window function used in nu_conv for filling index values in indexers.

  • window_sizes – dict, optional Window size used in nu_conv along each dimension specified in indexers. Note, no convolution is performed on dimensions not explicitly given in indexers.

  • causal – bool, optional Passed to nu_conv, where it forces window function f(x>0)=0.

  • **indexers – dict Dictionary with keys given by dimension names and values given by arrays of coordinate’s tick labels. Any mis-matched coordinate values will be filled in with NaN, and any mis-matched dimension names will simply be ignored.

  • interpolate – False or number Parameter indicating whether to interpolate the data so that there are interpolate data points within each time window

  • std_dev – str/int. Accepted strings are ‘propagate’ or ‘none’. Future options will include ‘mean’ and ‘population’. Setting to an integer will convolve the error uncertainties to the std_dev power before taking the std_dev root.

Returns

DataArray Another dataset array, with new coordinates and interpolated data.

See Also:

DataArray.reindex_like align

omfit_classes.omfit_data.split_data(data, **indexers)[source]

Split the OMFITdataArray in two wherever the step in a given coordinate exceeds the specified limit.

Parameters

indexers – dict-like with key,value pairs corresponding to dimension labels and step size.

Example:

>> dat = OMFITdataArray([0.074, -0.69, 0.32, -0.12, 0.14], coords=[[1, 2, 5, 6, 7]], dims=['time'], name='random_data')
>> dats = dat.split(time=2)
>> print(dats)

omfit_classes.omfit_data.smooth_data(data, window_size=11, window_function='hanning', axis=0)[source]

One dimensional smoothing. Every projection of the DataArray values in the specified dimension is passed to the OMFIT nu_conv smoothing function.

Parameters

axis (int,str (if data is DataArray or Dataset)) – Axis along which 1D smoothing is applied.

Documentation for the smooth function is below.

Convolution of a non-uniformly discretized array with window function.

The output values are np.nan where no points are found in finite windows (weight is zero). The gaussian window is infinite in extent, and thus returns values for all xo.

Supports uncertainties arrays. If the input does not have associated uncertainties, then the output will not have associated uncertainties.

Parameters
  • yi – array_like (…,N,…). Values of input array

  • xi – array_like (N,). Original grid points of input array (default: y indices)

  • xo – array_like (M,). Output grid points of convolution array (default xi)

  • window_size – float. Width passed to the window function (default: maximum xi step). For the Gaussian, sigma=window_size/4. and the convolution is integrated across +/-4.*sigma.

  • window_function – str/function. Accepted strings are ‘hanning’,’bartlett’,’blackman’,’gaussian’, or ‘boxcar’. Function should accept x and window_size as arguments and return a corresponding weight.

  • axis – int. Axis of y along which convolution is performed

  • causal – int. Forces f(x>0) = 0.

  • interpolate – False or integer number > 0 Parameter indicating whether to interpolate the data so that there are interpolate data points within each time window. This is useful in the presence of sparse data, which would result in stair-case output if not interpolated. The integer value sets the number of points per window size.

  • std_dev – str/int Accepted strings are ‘none’, ‘propagate’, ‘population’, ‘expand’, ‘deviation’, ‘variance’. Only ‘population’ and ‘none’ are valid if yi is not an uncertainties array (i.e. std_devs(yi) is all zeros). Setting to an integer will convolve the error uncertainties to the std_dev power before taking the std_dev root.

    std_dev = ‘propagate’ is true propagation of errors (slow if not interpolating)

    std_dev = ‘population’ is the weighted “standard deviation” of the points themselves (strictly correct for the boxcar window)

    std_dev = ‘expand’ is propagation of errors weighted by w~1/window_function

    std_dev = ‘deviation’ is equivalent to std_dev=1

    std_dev = ‘variance’ is equivalent to std_dev=2

Returns

convolved array on xo

>> M = 300
>> ninterp = 10
>> window = ['hanning', 'gaussian', 'boxcar'][1]
>> width = 0.05
>> f = figure(num='nu_conv example')
>> f.clf()
>> ax = f.use_subplot(111)
>>
>> xo = np.linspace(0, 1, 1000)
>>
>> x0 = xo
>> y0 = (x0 > 0.25) & (x0 < 0.75)
>> pyplot.plot(x0, y0, 'b-', label='function')
>>
>> x1 = np.cumsum(rand(M))
>> x1 = (x1 - min(x1)) / (max(x1) - min(x1))
>> y1 = (x1 > 0.25) & (x1 < 0.75)
>> pyplot.plot(x1, y1, 'b.', ms=10, label='subsampled function')
>> if window == 'hanning':
>>     ys = smooth(interp1e(x0, y0)(xo), int(len(xo) * width))
>>     pyplot.plot(xo, ys, 'g-', label='smoothed function')
>>     yc = smooth(interp1e(x1, y1)(xo), int(len(xo) * width))
>>     pyplot.plot(xo, yc, 'm--', lw=2, label='interp subsampled then convolve')
>>
>> y1 = unumpy.uarray(y1, y1 * 0.1)
>> a = time.time()
>> yo = nu_conv(y1, xi=x1, xo=xo, window_size=width, window_function=window, std_dev='propagate', interpolate=ninterp)
>> print('nu_conv time: {:}'.format(time.time() - a))
>> ye = nu_conv(y1, xi=x1, xo=xo, window_size=width, window_function=window, std_dev='expand', interpolate=ninterp)
>>
>> uband(x1, y1, color='b', marker='.', ms=10)
>> uband(xo, yo, color='r', lw=3, label='convolve subsampled')
>> uband(xo, ye, color='c', lw=3, label='convolve with expanded-error propagation')
>>
>> legend(loc=0)
>> pyplot.ylim([-0.1, 1.1])
>> pyplot.title('%d points, %s window, %3.3f width, interpolate %s' % (M, window, width, ninterp))

omfit_classes.omfit_data.exportDataset(data, path, complex_dim='i', *args, **kw)[source]

Method for saving xarray Dataset to NetCDF, with support for boolean, uncertain, and complex data Also, attributes support OMFITexpressions, lists, tuples, dicts

Parameters
  • data – Dataset object to be saved

  • path – filename to save NetCDF to

  • complex_dim – str. Name of extra dimension (0,1) assigned to (real, imag) complex data.

  • *args – arguments passed to Dataset.to_netcdf function

  • **kw – keyword arguments passed to Dataset.to_netcdf function

Returns

output from Dataset.to_netcdf function
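A sketch of the complex_dim idea: NetCDF has no native complex type, so complex data can be made storable by stacking real and imaginary parts along an extra length-2 dimension and unstacking on load. The actual encoding details are internal to exportDataset/importDataset.

```python
import numpy as np

# Complex data cannot be written to NetCDF directly; stacking real and
# imaginary parts along an extra length-2 dimension makes it storable.
z = np.array([1 + 2j, 3 - 4j])
stacked = np.stack([z.real, z.imag], axis=-1)      # shape (2, 2), all-real
restored = stacked[..., 0] + 1j * stacked[..., 1]  # round-trip back to complex
```

The extra dimension is named by the complex_dim argument (default ‘i’), which is how importDataset recognizes it on the way back in.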

ORIGINAL Dataset.to_netcdf DOCUMENTATION

Write dataset contents to a netCDF file.

path : str, Path or file-like, optional

Path to which to save this dataset. File-like objects are only supported by the scipy engine. If no path is provided, this function returns the resulting netCDF file as bytes; in this case, we need to use scipy, which does not support netCDF version 4 (the default format becomes NETCDF3_64BIT).

mode : {“w”, “a”}, default “w”

Write (‘w’) or append (‘a’) mode. If mode=’w’, any existing file at this location will be overwritten. If mode=’a’, existing variables will be overwritten.

format : {“NETCDF4”, “NETCDF4_CLASSIC”, “NETCDF3_64BIT”, “NETCDF3_CLASSIC”}, optional

File format for the resulting netCDF file:

  • NETCDF4: Data is stored in an HDF5 file, using netCDF4 API features.

  • NETCDF4_CLASSIC: Data is stored in an HDF5 file, using only netCDF 3 compatible API features.

  • NETCDF3_64BIT: 64-bit offset version of the netCDF 3 file format, which fully supports 2+ GB files, but is only compatible with clients linked against netCDF version 3.6.0 or later.

  • NETCDF3_CLASSIC: The classic netCDF 3 file format. It does not handle 2+ GB files very well.

All formats are supported by the netCDF4-python library. scipy.io.netcdf only supports the last two formats.

The default format is NETCDF4 if you are saving a file to disk and have the netCDF4-python library available. Otherwise, xarray falls back to using scipy to write netCDF files and defaults to the NETCDF3_64BIT format (scipy does not support netCDF4).

group : str, optional

Path to the netCDF4 group in the given file to open (only works for format=’NETCDF4’). The group(s) will be created if necessary.

engine : {“netcdf4”, “scipy”, “h5netcdf”}, optional

Engine to use when writing netCDF files. If not provided, the default engine is chosen based on available dependencies, with a preference for ‘netcdf4’ if writing to a file on disk.

encoding : dict, optional

Nested dictionary with variable names as keys and dictionaries of variable specific encodings as values, e.g., {"my_variable": {"dtype": "int16", "scale_factor": 0.1, "zlib": True}, ...}

The h5netcdf engine supports both the NetCDF4-style compression encoding parameters {"zlib": True, "complevel": 9} and the h5py ones {"compression": "gzip", "compression_opts": 9}. This allows using any compression plugin installed in the HDF5 library, e.g. LZF.

unlimited_dims : iterable of hashable, optional

Dimension(s) that should be serialized as unlimited dimensions. By default, no dimensions are treated as unlimited dimensions. Note that unlimited_dims may also be set via dataset.encoding["unlimited_dims"].

compute: bool, default: True

If true compute immediately, otherwise return a dask.delayed.Delayed object that can be computed later.

invalid_netcdf: bool, default: False

Only valid along with engine="h5netcdf". If True, allow writing hdf5 files which are invalid netcdf as described in https://github.com/shoyer/h5netcdf.

omfit_classes.omfit_data.importDataset(filename_or_obj=None, complex_dim='i', *args, **kw)[source]

Method for loading from xarray Dataset saved as netcdf file, with support for boolean, uncertain, and complex data. Also, attributes support OMFITexpressions, lists, tuples, dicts

Parameters
  • filename_or_obj – str, file or xarray.backends.*DataStore Strings are interpreted as a path to a netCDF file or an OpenDAP URL and opened with python-netCDF4, unless the filename ends with .gz, in which case the file is gunzipped and opened with scipy.io.netcdf (only netCDF3 supported). File-like objects are opened with scipy.io.netcdf (only netCDF3 supported).

  • complex_dim – str, name of length-2 dimension (0,1) containing (real, imag) complex data.

  • *args – arguments passed to xarray.open_dataset function

  • **kw – keywords arguments passed to xarray.open_dataset function

Returns

xarray Dataset object containing the loaded data

ORIGINAL xarray.open_dataset DOCUMENTATION Open and decode a dataset from a file or file-like object.

filename_or_obj : str, Path, file-like or DataStore

Strings and Path objects are interpreted as a path to a netCDF file or an OpenDAP URL and opened with python-netCDF4, unless the filename ends with .gz, in which case the file is gunzipped and opened with scipy.io.netcdf (only netCDF3 supported). Byte-strings or file-like objects are opened by scipy.io.netcdf (netCDF3) or h5py (netCDF4/HDF).

group : str, optional

Path to the netCDF4 group in the given file to open (only works for netCDF4 files).

decode_cf : bool, optional

Whether to decode these variables, assuming they were saved according to CF conventions.

mask_and_scale : bool, optional

If True, replace array values equal to _FillValue with NA and scale values according to the formula original_values * scale_factor + add_offset, where _FillValue, scale_factor and add_offset are taken from variable attributes (if they exist). If the _FillValue or missing_value attribute contains multiple values a warning will be issued and all array values matching one of the multiple values will be replaced by NA. mask_and_scale defaults to True except for the pseudonetcdf backend.

decode_times : bool, optional

If True, decode times encoded in the standard NetCDF datetime format into datetime objects. Otherwise, leave them encoded as numbers.

concat_characters : bool, optional

If True, concatenate along the last dimension of character arrays to form string arrays. Dimensions will only be concatenated over (and removed) if they have no corresponding variable and if they are only used as the last dimension of character arrays.

decode_coords : bool or {“coordinates”, “all”}, optional

Controls which variables are set as coordinate variables:

  • “coordinates” or True: Set variables referred to in the 'coordinates' attribute of the datasets or individual variables as coordinate variables.

  • “all”: Set variables referred to in 'grid_mapping', 'bounds' and other attributes as coordinate variables.

engine : {“netcdf4”, “scipy”, “pydap”, “h5netcdf”, “pynio”, “cfgrib”, “pseudonetcdf”, “zarr”}, optional

Engine to use when reading files. If not provided, the default engine is chosen based on available dependencies, with a preference for “netcdf4”.

chunks : int or dict, optional

If chunks is provided, it is used to load the new dataset into dask arrays. chunks=-1 loads the dataset with dask using a single chunk for all arrays. chunks={} loads the dataset with dask using engine-preferred chunks if exposed by the backend, otherwise with a single chunk for all arrays. chunks='auto' will use dask auto chunking, taking into account the engine preferred chunks. See dask chunking for more details.

lock : False or lock-like, optional

Resource lock to use when reading data from disk. Only relevant when using dask or another form of parallelism. By default, appropriate locks are chosen to safely read and write files with the currently active dask scheduler.

cache : bool, optional

If True, cache data loaded from the underlying datastore in memory as NumPy arrays when accessed to avoid reading from the underlying data- store multiple times. Defaults to True unless you specify the chunks argument to use dask, in which case it defaults to False. Does not change the behavior of coordinates corresponding to dimensions, which always load their data from disk into a pandas.Index.

drop_variables: str or iterable, optional

A variable or list of variables to exclude from being parsed from the dataset. This may be useful to drop variables with problems or inconsistent values.

backend_kwargs: dict, optional

A dictionary of keyword arguments to pass on to the backend. This may be useful when backend options would improve performance or allow user control of dataset processing.

use_cftime: bool, optional

Only relevant if encoded dates come from a standard calendar (e.g. “gregorian”, “proleptic_gregorian”, “standard”, or not specified). If None (default), attempt to decode times to np.datetime64[ns] objects; if this is not possible, decode times to cftime.datetime objects. If True, always decode times to cftime.datetime objects, regardless of whether or not they can be represented using np.datetime64[ns] objects. If False, always decode times to np.datetime64[ns] objects; if this is not possible raise an error.

decode_timedelta : bool, optional

If True, decode variables and coordinates with time units in {“days”, “hours”, “minutes”, “seconds”, “milliseconds”, “microseconds”} into timedelta objects. If False, leave them encoded as numbers. If None (default), assume the same value of decode_times.

Returns

dataset : Dataset

The newly created dataset.

open_dataset opens the file with read-only access. When you modify values of a Dataset, even one linked to files on disk, only the in-memory copy you are manipulating in xarray is modified: the original file on disk is never touched.

See Also:

open_mfdataset

class omfit_classes.omfit_data.OMFITncDataset(filename, lock=False, exportDataset_kw={}, data_vars=None, coords=None, attrs=None, **kw)[source]

Bases: omfit_classes.startup_framework.OMFITobject, omfit_classes.sortedDict.OMFITdataset

Class that merges the power of Datasets with OMFIT dynamic loading of objects

Parameters
  • filename – Path to file

  • lock – Prevent in memory changes to the DataArray entries contained

  • exportDataset_kw – dictionary passed to exportDataset on save

  • data_vars – see xarray.Dataset

  • coords – see xarray.Dataset

  • attrs – see xarray.Dataset

  • **kw – arguments passed to OMFITobject

set_lock(lock=True)[source]

Make all the DataArrays immutable and disable inplace updates

load()[source]

Loads the netcdf into memory using the importDataset function.

save(force_write=False, **kw)[source]

Saves file using system move and copy commands if data in memory is unchanged, and the exportDataset function if it has changed.

Saving NetCDF files takes much longer than loading them. Since 99% of the time NetCDF files are not edited but just read, it makes sense to check whether any changes were made before re-saving the same file from scratch. If the file has not changed, then one can just copy the “old” file with a system copy.

Parameters
  • force_write – bool. Forces the (re)writing of the file, even if the data is unchanged.

  • **kw – keyword arguments passed to Dataset.to_netcdf function
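The copy-if-unchanged strategy described above can be sketched as follows. This is a hypothetical helper, not the class method: the real change detection and file handling in OMFIT may differ, but the idea is the same — re-export only when the in-memory data changed, otherwise reuse the existing file with a cheap system copy.

```python
import hashlib
import os
import shutil

def save_sketch(data_bytes, path, last_hash):
    """Re-export only when the in-memory data changed; otherwise reuse
    the existing on-disk file with a cheap system copy."""
    h = hashlib.sha256(data_bytes).hexdigest()
    if h == last_hash and os.path.exists(path):
        shutil.copy2(path, path + '.bak')  # stands in for the system copy/move
        return 'copied', h
    with open(path, 'wb') as f:            # stands in for exportDataset
        f.write(data_bytes)
    return 'written', h
```

Passing force_write=True to the real save() corresponds to skipping the hash comparison and always taking the slow re-export path.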

class omfit_classes.omfit_data.OMFITncDynamicDataset(filename, **kw)[source]

Bases: omfit_classes.omfit_data.OMFITncDataset

Parameters
  • filename – Path to file

  • lock – Prevent in memory changes to the DataArray entries contained

  • exportDataset_kw – dictionary passed to exportDataset on save

  • data_vars – see xarray.Dataset

  • coords – see xarray.Dataset

  • attrs – see xarray.Dataset

  • **kw – arguments passed to OMFITobject

update_dynamic_keys(cls)[source]
keys()[source]
save(*args, **kw)[source]

Saves file using system move and copy commands if data in memory is unchanged, and the exportDataset function if it has changed.

Saving NetCDF files takes much longer than loading them. Since 99% of the time NetCDF files are not edited but just read, it makes sense to check whether any changes were made before re-saving the same file from scratch. If the file has not changed, then one can just copy the “old” file with a system copy.

Parameters
  • force_write – bool. Forces the (re)writing of the file, even if the data is unchanged.

  • **kw – keyword arguments passed to Dataset.to_netcdf function

omfit_classes.omfit_data.pandas_read_json(path_or_buf=None, orient=None, typ='frame', dtype=None, convert_axes=None, convert_dates=True, keep_default_dates: bool = True, numpy: bool = False, precise_float: bool = False, date_unit=None, encoding=None, lines: bool = False, chunksize: Optional[int] = None, compression: Optional[Union[str, Dict[str, Any]]] = 'infer', nrows: Optional[int] = None, storage_options: Optional[Dict[str, Any]] = None)

Convert a JSON string to pandas object.

path_or_buf : a valid JSON str, path object or file-like object

Any valid string path is acceptable. The string could be a URL. Valid URL schemes include http, ftp, s3, and file. For file URLs, a host is expected. A local file could be: file://localhost/path/to/table.json.

If you want to pass in a path object, pandas accepts any os.PathLike.

By file-like object, we refer to objects with a read() method, such as a file handle (e.g. via builtin open function) or StringIO.

orient : str

Indication of expected JSON string format. Compatible JSON strings can be produced by to_json() with a corresponding orient value. The set of possible orients is:

  • 'split' : dict like {index -> [index], columns -> [columns], data -> [values]}

  • 'records' : list like [{column -> value}, ... , {column -> value}]

  • 'index' : dict like {index -> {column -> value}}

  • 'columns' : dict like {column -> {index -> value}}

  • 'values' : just the values array

The allowed and default values depend on the value of the typ parameter.

  • when typ == 'series',

    • allowed orients are {'split','records','index'}

    • default is 'index'

    • The Series index must be unique for orient 'index'.

  • when typ == 'frame',

    • allowed orients are {'split','records','index', 'columns','values', 'table'}

    • default is 'columns'

    • The DataFrame index must be unique for orients 'index' and 'columns'.

    • The DataFrame columns must be unique for orients 'index', 'columns', and 'records'.

typ : {‘frame’, ‘series’}, default ‘frame’

The type of object to recover.

dtype : bool or dict, default None

If True, infer dtypes; if a dict of column to dtype, then use those; if False, then don’t infer dtypes at all, applies only to the data.

For all orient values except 'table', default is True.

Changed in version 0.25.0: Not applicable for orient='table'.

convert_axes : bool, default None

Try to convert the axes to the proper dtypes.

For all orient values except 'table', default is True.

Changed in version 0.25.0: Not applicable for orient='table'.

convert_dates : bool or list of str, default True

If True then default datelike columns may be converted (depending on keep_default_dates). If False, no dates will be converted. If a list of column names, then those columns will be converted and default datelike columns may also be converted (depending on keep_default_dates).

keep_default_dates : bool, default True

If parsing dates (convert_dates is not False), then try to parse the default datelike columns. A column label is datelike if

  • it ends with '_at',

  • it ends with '_time',

  • it begins with 'timestamp',

  • it is 'modified', or

  • it is 'date'.

numpy : bool, default False

Direct decoding to numpy arrays. Supports numeric data only, but non-numeric column and index labels are supported. Note also that the JSON ordering MUST be the same for each term if numpy=True.

Deprecated since version 1.0.0.

precise_float : bool, default False

Set to enable usage of higher precision (strtod) function when decoding string to double values. Default (False) is to use fast but less precise builtin functionality.

date_unit : str, default None

The timestamp unit to detect if converting dates. The default behaviour is to try and detect the correct precision, but if this is not desired then pass one of ‘s’, ‘ms’, ‘us’ or ‘ns’ to force parsing only seconds, milliseconds, microseconds or nanoseconds respectively.

encoding : str, default is ‘utf-8’

The encoding to use to decode py3 bytes.

lines : bool, default False

Read the file as a json object per line.

chunksize : int, optional

Return JsonReader object for iteration. See the line-delimited json docs for more information on chunksize. This can only be passed if lines=True. If this is None, the file will be read into memory all at once.

Changed in version 1.2: JsonReader is a context manager.

compression : {‘infer’, ‘gzip’, ‘bz2’, ‘zip’, ‘xz’, None}, default ‘infer’

For on-the-fly decompression of on-disk data. If ‘infer’, then use gzip, bz2, zip or xz if path_or_buf is a string ending in ‘.gz’, ‘.bz2’, ‘.zip’, or ‘.xz’, respectively, and no decompression otherwise. If using ‘zip’, the ZIP file must contain only one data file to be read in. Set to None for no decompression.

nrows : int, optional

The number of lines from the line-delimited jsonfile that has to be read. This can only be passed if lines=True. If this is None, all the rows will be returned.

New in version 1.1.

storage_options : dict, optional

Extra options that make sense for a particular storage connection, e.g. host, port, username, password, etc., if using a URL that will be parsed by fsspec, e.g., starting “s3://”, “gcs://”. An error will be raised if providing this argument with a non-fsspec URL. See the fsspec and backend storage implementation docs for the set of allowed keys and values.

New in version 1.2.0.

Returns

Series or DataFrame

The type returned depends on the value of typ.

See Also:

DataFrame.to_json : Convert a DataFrame to a JSON string. Series.to_json : Convert a Series to a JSON string.

Specific to orient='table', if a DataFrame with a literal Index name of index gets written with to_json(), the subsequent read operation will incorrectly set the Index name to None. This is because index is also used by DataFrame.to_json() to denote a missing Index name, and the subsequent read_json() operation cannot distinguish between the two. The same limitation is encountered with a MultiIndex and any names beginning with 'level_'.

>>> df = pd.DataFrame([['a', 'b'], ['c', 'd']],
...                   index=['row 1', 'row 2'],
...                   columns=['col 1', 'col 2'])

Encoding/decoding a Dataframe using 'split' formatted JSON:

>>> df.to_json(orient='split')
'{"columns":["col 1","col 2"],
  "index":["row 1","row 2"],
  "data":[["a","b"],["c","d"]]}'
>>> pd.read_json(_, orient='split')
      col 1 col 2
row 1     a     b
row 2     c     d

Encoding/decoding a Dataframe using 'index' formatted JSON:

>>> df.to_json(orient='index')
'{"row 1":{"col 1":"a","col 2":"b"},"row 2":{"col 1":"c","col 2":"d"}}'
>>> pd.read_json(_, orient='index')
      col 1 col 2
row 1     a     b
row 2     c     d

Encoding/decoding a Dataframe using 'records' formatted JSON. Note that index labels are not preserved with this encoding.

>>> df.to_json(orient='records')
'[{"col 1":"a","col 2":"b"},{"col 1":"c","col 2":"d"}]'
>>> pd.read_json(_, orient='records')
  col 1 col 2
0     a     b
1     c     d

Encoding with Table Schema

>>> df.to_json(orient='table')
'{"schema": {"fields": [{"name": "index", "type": "string"},
                        {"name": "col 1", "type": "string"},
                        {"name": "col 2", "type": "string"}],
                "primaryKey": "index",
                "pandas_version": "0.20.0"},
    "data": [{"index": "row 1", "col 1": "a", "col 2": "b"},
            {"index": "row 2", "col 1": "c", "col 2": "d"}]}'
class omfit_classes.omfit_data.DataFrame(data=None, index: Optional[Collection] = None, columns: Optional[Collection] = None, dtype: Optional[Union[ExtensionDtype, str, numpy.dtype, Type[Union[str, float, int, complex, bool, object]]]] = None, copy: bool = False)[source]

Bases: pandas.core.generic.NDFrame, pandas.core.arraylike.OpsMixin

Two-dimensional, size-mutable, potentially heterogeneous tabular data.

Data structure also contains labeled axes (rows and columns). Arithmetic operations align on both row and column labels. Can be thought of as a dict-like container for Series objects. The primary pandas data structure.

data : ndarray (structured or homogeneous), Iterable, dict, or DataFrame

Dict can contain Series, arrays, constants, dataclass or list-like objects. If data is a dict, column order follows insertion-order.

Changed in version 0.25.0: If data is a list of dicts, column order follows insertion-order.

index : Index or array-like

Index to use for resulting frame. Will default to RangeIndex if no indexing information part of input data and no index provided.

columns : Index or array-like

Column labels to use for resulting frame. Will default to RangeIndex (0, 1, 2, …, n) if no column labels are provided.

dtype : dtype, default None

Data type to force. Only a single dtype is allowed. If None, infer.

copy : bool, default False

Copy data from inputs. Only affects DataFrame / 2d ndarray input.

DataFrame.from_records : Constructor from tuples, also record arrays. DataFrame.from_dict : From dicts of Series, arrays, or dicts. read_csv : Read a comma-separated values (csv) file into DataFrame. read_table : Read general delimited file into DataFrame. read_clipboard : Read text from clipboard into DataFrame.

Constructing DataFrame from a dictionary.

>>> d = {'col1': [1, 2], 'col2': [3, 4]}
>>> df = pd.DataFrame(data=d)
>>> df
   col1  col2
0     1     3
1     2     4

Notice that the inferred dtype is int64.

>>> df.dtypes
col1    int64
col2    int64
dtype: object

To enforce a single dtype:

>>> df = pd.DataFrame(data=d, dtype=np.int8)
>>> df.dtypes
col1    int8
col2    int8
dtype: object

Constructing DataFrame from numpy ndarray:

>>> df2 = pd.DataFrame(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]),
...                    columns=['a', 'b', 'c'])
>>> df2
   a  b  c
0  1  2  3
1  4  5  6
2  7  8  9

Constructing DataFrame from dataclass:

>>> from dataclasses import make_dataclass
>>> Point = make_dataclass("Point", [("x", int), ("y", int)])
>>> pd.DataFrame([Point(0, 0), Point(0, 3), Point(2, 3)])
    x  y
0  0  0
1  0  3
2  2  3
property axes

Return a list representing the axes of the DataFrame.

It has the row axis labels and column axis labels as the only members. They are returned in that order.

>>> df = pd.DataFrame({'col1': [1, 2], 'col2': [3, 4]})
>>> df.axes
[RangeIndex(start=0, stop=2, step=1), Index(['col1', 'col2'],
dtype='object')]
property shape

Return a tuple representing the dimensionality of the DataFrame.

ndarray.shape : Tuple of array dimensions.

>>> df = pd.DataFrame({'col1': [1, 2], 'col2': [3, 4]})
>>> df.shape
(2, 2)
>>> df = pd.DataFrame({'col1': [1, 2], 'col2': [3, 4],
...                    'col3': [5, 6]})
>>> df.shape
(2, 3)
to_string(buf: Optional[Union[PathLike[str], str, IO[str], io.RawIOBase, io.BufferedIOBase, io.TextIOBase, _io.TextIOWrapper, mmap.mmap]] = None, columns: Optional[Sequence[str]] = None, col_space: Optional[int] = None, header: Union[bool, Sequence[str]] = True, index: bool = True, na_rep: str = 'NaN', formatters: Optional[Union[List[Callable], Tuple[Callable, ], Mapping[Union[str, int], Callable]]] = None, float_format: Optional[Union[str, Callable, EngFormatter]] = None, sparsify: Optional[bool] = None, index_names: bool = True, justify: Optional[str] = None, max_rows: Optional[int] = None, min_rows: Optional[int] = None, max_cols: Optional[int] = None, show_dimensions: bool = False, decimal: str = '.', line_width: Optional[int] = None, max_colwidth: Optional[int] = None, encoding: Optional[str] = None) → Optional[str][source]

Render a DataFrame to a console-friendly tabular output.

buf : str, Path or StringIO-like, optional, default None

Buffer to write to. If None, the output is returned as a string.

columns : sequence, optional, default None

The subset of columns to write. Writes all columns by default.

col_space : int, list or dict of int, optional

The minimum width of each column.

header : bool or sequence, optional

Write out the column names. If a list of strings is given, it is assumed to be aliases for the column names.

index : bool, optional, default True

Whether to print index (row) labels.

na_rep : str, optional, default ‘NaN’

String representation of NaN to use.

formatters : list, tuple or dict of one-param. functions, optional

Formatter functions to apply to columns’ elements by position or name. The result of each function must be a unicode string. List/tuple must be of length equal to the number of columns.

float_format : one-parameter function, optional, default None

Formatter function to apply to columns’ elements if they are floats. This function must return a unicode string and will be applied only to the non-NaN elements, with NaN being handled by na_rep.

Changed in version 1.2.0.

sparsify : bool, optional, default True

Set to False for a DataFrame with a hierarchical index to print every multiindex key at each row.

index_names : bool, optional, default True

Prints the names of the indexes.

justify : str, default None

How to justify the column labels. If None uses the option from the print configuration (controlled by set_option), ‘right’ out of the box. Valid values are

  • left

  • right

  • center

  • justify

  • justify-all

  • start

  • end

  • inherit

  • match-parent

  • initial

  • unset.

max_rows : int, optional

Maximum number of rows to display in the console.

min_rows : int, optional

The number of rows to display in the console in a truncated repr (when number of rows is above max_rows).

max_cols : int, optional

Maximum number of columns to display in the console.

show_dimensions : bool, default False

Display DataFrame dimensions (number of rows by number of columns).

decimal : str, default ‘.’

Character recognized as decimal separator, e.g. ‘,’ in Europe.

line_width : int, optional

Width to wrap a line in characters.

max_colwidth : int, optional

Max width to truncate each column in characters. By default, no limit.

New in version 1.0.0.

encoding : str, default “utf-8”

Set character encoding.

New in version 1.0.

str or None

If buf is None, returns the result as a string. Otherwise returns None.

to_html : Convert DataFrame to HTML.

>>> d = {'col1': [1, 2, 3], 'col2': [4, 5, 6]}
>>> df = pd.DataFrame(d)
>>> print(df.to_string())
   col1  col2
0     1     4
1     2     5
2     3     6
property style

Returns a Styler object.

Contains methods for building a styled HTML representation of the DataFrame.

io.formats.style.Styler : Helps style a DataFrame or Series according to the data with HTML and CSS.
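A minimal sketch of using the Styler, assuming the optional jinja2 dependency is installed (the format spec here is illustrative):

```python
import pandas as pd

df = pd.DataFrame({"col1": [1.234, 5.678]})
try:
    # .style returns a Styler; format() controls how values are rendered.
    html = df.style.format("{:.2f}").render()  # requires jinja2
except ImportError:
    html = None  # jinja2 not available

if html is not None:
    assert "1.23" in html
```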

items() → Iterable[Tuple[Optional[Hashable], pandas.core.series.Series]][source]

Iterate over (column name, Series) pairs.

Iterates over the DataFrame columns, returning a tuple with the column name and the content as a Series.

label : object

The column names for the DataFrame being iterated over.

content : Series

The column entries belonging to each label, as a Series.

DataFrame.iterrows : Iterate over DataFrame rows as (index, Series) pairs.

DataFrame.itertuples : Iterate over DataFrame rows as namedtuples of the values.

>>> df = pd.DataFrame({'species': ['bear', 'bear', 'marsupial'],
...                   'population': [1864, 22000, 80000]},
...                   index=['panda', 'polar', 'koala'])
>>> df
        species   population
panda   bear      1864
polar   bear      22000
koala   marsupial 80000
>>> for label, content in df.items():
...     print(f'label: {label}')
...     print(f'content: {content}', sep='\n')
...
label: species
content:
panda         bear
polar         bear
koala    marsupial
Name: species, dtype: object
label: population
content:
panda     1864
polar    22000
koala    80000
Name: population, dtype: int64
iteritems() → Iterable[Tuple[Optional[Hashable], pandas.core.series.Series]][source]

Iterate over (column name, Series) pairs.

Iterates over the DataFrame columns, returning a tuple with the column name and the content as a Series.

label : object

The column names for the DataFrame being iterated over.

content : Series

The column entries belonging to each label, as a Series.

DataFrame.iterrows : Iterate over DataFrame rows as (index, Series) pairs.

DataFrame.itertuples : Iterate over DataFrame rows as namedtuples of the values.

>>> df = pd.DataFrame({'species': ['bear', 'bear', 'marsupial'],
...                   'population': [1864, 22000, 80000]},
...                   index=['panda', 'polar', 'koala'])
>>> df
        species   population
panda   bear      1864
polar   bear      22000
koala   marsupial 80000
>>> for label, content in df.items():
...     print(f'label: {label}')
...     print(f'content: {content}', sep='\n')
...
label: species
content:
panda         bear
polar         bear
koala    marsupial
Name: species, dtype: object
label: population
content:
panda     1864
polar    22000
koala    80000
Name: population, dtype: int64
iterrows() → Iterable[Tuple[Optional[Hashable], pandas.core.series.Series]][source]

Iterate over DataFrame rows as (index, Series) pairs.

index : label or tuple of label

The index of the row. A tuple for a MultiIndex.

data : Series

The data of the row as a Series.

DataFrame.itertuples : Iterate over DataFrame rows as namedtuples of the values. DataFrame.items : Iterate over (column name, Series) pairs.

  1. Because iterrows returns a Series for each row, it does not preserve dtypes across the rows (dtypes are preserved across columns for DataFrames). For example,

    >>> df = pd.DataFrame([[1, 1.5]], columns=['int', 'float'])
    >>> row = next(df.iterrows())[1]
    >>> row
    int      1.0
    float    1.5
    Name: 0, dtype: float64
    >>> print(row['int'].dtype)
    float64
    >>> print(df['int'].dtype)
    int64
    

    To preserve dtypes while iterating over the rows, it is better to use itertuples() which returns namedtuples of the values and which is generally faster than iterrows.

  2. You should never modify something you are iterating over. This is not guaranteed to work in all cases. Depending on the data types, the iterator returns a copy and not a view, and writing to it will have no effect.
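A minimal iterrows loop, tying the pieces above together:

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2]}, index=["x", "y"])

pairs = []
for idx, row in df.iterrows():
    # idx is the row label; row is a Series holding that row's values
    pairs.append((idx, row["a"]))

assert pairs == [("x", 1), ("y", 2)]
```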

itertuples(index: bool = True, name: Optional[str] = 'Pandas')[source]

Iterate over DataFrame rows as namedtuples.

index : bool, default True

If True, return the index as the first element of the tuple.

name : str or None, default “Pandas”

The name of the returned namedtuples or None to return regular tuples.

iterator

An object to iterate over namedtuples for each row in the DataFrame with the first field possibly being the index and following fields being the column values.

DataFrame.iterrows : Iterate over DataFrame rows as (index, Series) pairs.

DataFrame.items : Iterate over (column name, Series) pairs.

The column names will be renamed to positional names if they are invalid Python identifiers, repeated, or start with an underscore. On python versions < 3.7 regular tuples are returned for DataFrames with a large number of columns (>254).

>>> df = pd.DataFrame({'num_legs': [4, 2], 'num_wings': [0, 2]},
...                   index=['dog', 'hawk'])
>>> df
      num_legs  num_wings
dog          4          0
hawk         2          2
>>> for row in df.itertuples():
...     print(row)
...
Pandas(Index='dog', num_legs=4, num_wings=0)
Pandas(Index='hawk', num_legs=2, num_wings=2)

By setting the index parameter to False we can remove the index as the first element of the tuple:

>>> for row in df.itertuples(index=False):
...     print(row)
...
Pandas(num_legs=4, num_wings=0)
Pandas(num_legs=2, num_wings=2)

With the name parameter set we set a custom name for the yielded namedtuples:

>>> for row in df.itertuples(name='Animal'):
...     print(row)
...
Animal(Index='dog', num_legs=4, num_wings=0)
Animal(Index='hawk', num_legs=2, num_wings=2)
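The renaming of invalid identifiers mentioned above can be seen with hypothetical column names that are not valid Python identifiers:

```python
import pandas as pd

# 'a b' contains a space and 'class' is a keyword, so neither is a valid
# namedtuple field name; itertuples falls back to positional names.
df = pd.DataFrame({"a b": [1], "class": [2]})
row = next(df.itertuples(index=False))

assert tuple(row) == (1, 2)
assert row._fields == ("_0", "_1")  # positional replacement names
```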
dot(other)[source]

Compute the matrix multiplication between the DataFrame and other.

This method computes the matrix product between the DataFrame and the values of an other Series, DataFrame or a numpy array.

It can also be called using self @ other in Python >= 3.5.

other : Series, DataFrame or array-like

The other object to compute the matrix product with.

Series or DataFrame

If other is a Series, return the matrix product between self and other as a Series. If other is a DataFrame or a numpy.array, return the matrix product of self and other as a DataFrame.

Series.dot: Similar method for Series.

The dimensions of DataFrame and other must be compatible in order to compute the matrix multiplication. In addition, the column names of DataFrame and the index of other must contain the same values, as they will be aligned prior to the multiplication.

The dot method for Series computes the inner product, instead of the matrix product here.

Here we multiply a DataFrame with a Series.

>>> df = pd.DataFrame([[0, 1, -2, -1], [1, 1, 1, 1]])
>>> s = pd.Series([1, 1, 2, 1])
>>> df.dot(s)
0    -4
1     5
dtype: int64

Here we multiply a DataFrame with another DataFrame.

>>> other = pd.DataFrame([[0, 1], [1, 2], [-1, -1], [2, 0]])
>>> df.dot(other)
    0   1
0   1   4
1   2   2

Note that the dot method gives the same result as @

>>> df @ other
    0   1
0   1   4
1   2   2

The dot method also works if other is a np.array.

>>> arr = np.array([[0, 1], [1, 2], [-1, -1], [2, 0]])
>>> df.dot(arr)
    0   1
0   1   4
1   2   2

Note how shuffling of the objects does not change the result.

>>> s2 = s.reindex([1, 0, 2, 3])
>>> df.dot(s2)
0    -4
1     5
dtype: int64
classmethod from_dict(data, orient='columns', dtype=None, columns=None) → pandas.core.frame.DataFrame[source]

Construct DataFrame from dict of array-like or dicts.

Creates DataFrame object from dictionary by columns or by index allowing dtype specification.

data : dict

Of the form {field : array-like} or {field : dict}.

orient : {‘columns’, ‘index’}, default ‘columns’

The “orientation” of the data. If the keys of the passed dict should be the columns of the resulting DataFrame, pass ‘columns’ (default). Otherwise if the keys should be rows, pass ‘index’.

dtype : dtype, default None

Data type to force, otherwise infer.

columns : list, default None

Column labels to use when orient='index'. Raises a ValueError if used with orient='columns'.

DataFrame

DataFrame.from_records : DataFrame from structured ndarray, sequence of tuples or dicts, or DataFrame.

DataFrame : DataFrame object creation using constructor.

By default the keys of the dict become the DataFrame columns:

>>> data = {'col_1': [3, 2, 1, 0], 'col_2': ['a', 'b', 'c', 'd']}
>>> pd.DataFrame.from_dict(data)
   col_1 col_2
0      3     a
1      2     b
2      1     c
3      0     d

Specify orient='index' to create the DataFrame using dictionary keys as rows:

>>> data = {'row_1': [3, 2, 1, 0], 'row_2': ['a', 'b', 'c', 'd']}
>>> pd.DataFrame.from_dict(data, orient='index')
       0  1  2  3
row_1  3  2  1  0
row_2  a  b  c  d

When using the ‘index’ orientation, the column names can be specified manually:

>>> pd.DataFrame.from_dict(data, orient='index',
...                        columns=['A', 'B', 'C', 'D'])
       A  B  C  D
row_1  3  2  1  0
row_2  a  b  c  d
to_numpy(dtype=None, copy: bool = False, na_value=<object object>) → numpy.ndarray[source]

Convert the DataFrame to a NumPy array.

New in version 0.24.0.

By default, the dtype of the returned array will be the common NumPy dtype of all types in the DataFrame. For example, if the dtypes are float16 and float32, the results dtype will be float32. This may require copying data and coercing values, which may be expensive.

dtype : str or numpy.dtype, optional

The dtype to pass to numpy.asarray().

copy : bool, default False

Whether to ensure that the returned value is not a view on another array. Note that copy=False does not ensure that to_numpy() is no-copy. Rather, copy=True ensures that a copy is made, even if not strictly necessary.

na_value : Any, optional

The value to use for missing values. The default value depends on dtype and the dtypes of the DataFrame columns.

New in version 1.1.0.

numpy.ndarray

Series.to_numpy : Similar method for Series.

>>> pd.DataFrame({"A": [1, 2], "B": [3, 4]}).to_numpy()
array([[1, 3],
       [2, 4]])

With heterogeneous data, the lowest common type will have to be used.

>>> df = pd.DataFrame({"A": [1, 2], "B": [3.0, 4.5]})
>>> df.to_numpy()
array([[1. , 3. ],
       [2. , 4.5]])

For a mix of numeric and non-numeric types, the output array will have object dtype.

>>> df['C'] = pd.date_range('2000', periods=2)
>>> df.to_numpy()
array([[1, 3.0, Timestamp('2000-01-01 00:00:00')],
       [2, 4.5, Timestamp('2000-01-02 00:00:00')]], dtype=object)
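The na_value parameter (pandas >= 1.1) can be sketched as follows:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"A": [1.0, np.nan]})

# na_value substitutes a value for missing entries in the resulting array.
arr = df.to_numpy(na_value=0.0)
assert arr[1, 0] == 0.0
```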
to_dict(orient='dict', into=<class 'dict'>)[source]

Convert the DataFrame to a dictionary.

The type of the key-value pairs can be customized with the parameters (see below).

orient : str {‘dict’, ‘list’, ‘series’, ‘split’, ‘records’, ‘index’}

Determines the type of the values of the dictionary.

  • ‘dict’ (default) : dict like {column -> {index -> value}}

  • ‘list’ : dict like {column -> [values]}

  • ‘series’ : dict like {column -> Series(values)}

  • ‘split’ : dict like {‘index’ -> [index], ‘columns’ -> [columns], ‘data’ -> [values]}

  • ‘records’ : list like [{column -> value}, … , {column -> value}]

  • ‘index’ : dict like {index -> {column -> value}}

Abbreviations are allowed: ‘s’ indicates ‘series’ and ‘sp’ indicates ‘split’.

into : class, default dict

The collections.abc.Mapping subclass used for all Mappings in the return value. Can be the actual class or an empty instance of the mapping type you want. If you want a collections.defaultdict, you must pass it initialized.

dict, list or collections.abc.Mapping

Return a collections.abc.Mapping object representing the DataFrame. The resulting transformation depends on the orient parameter.

DataFrame.from_dict: Create a DataFrame from a dictionary. DataFrame.to_json: Convert a DataFrame to JSON format.

>>> df = pd.DataFrame({'col1': [1, 2],
...                    'col2': [0.5, 0.75]},
...                   index=['row1', 'row2'])
>>> df
      col1  col2
row1     1  0.50
row2     2  0.75
>>> df.to_dict()
{'col1': {'row1': 1, 'row2': 2}, 'col2': {'row1': 0.5, 'row2': 0.75}}

You can specify the return orientation.

>>> df.to_dict('series')
{'col1': row1    1
         row2    2
Name: col1, dtype: int64,
'col2': row1    0.50
        row2    0.75
Name: col2, dtype: float64}
>>> df.to_dict('split')
{'index': ['row1', 'row2'], 'columns': ['col1', 'col2'],
 'data': [[1, 0.5], [2, 0.75]]}
>>> df.to_dict('records')
[{'col1': 1, 'col2': 0.5}, {'col1': 2, 'col2': 0.75}]
>>> df.to_dict('index')
{'row1': {'col1': 1, 'col2': 0.5}, 'row2': {'col1': 2, 'col2': 0.75}}

You can also specify the mapping type.

>>> from collections import OrderedDict, defaultdict
>>> df.to_dict(into=OrderedDict)
OrderedDict([('col1', OrderedDict([('row1', 1), ('row2', 2)])),
             ('col2', OrderedDict([('row1', 0.5), ('row2', 0.75)]))])

If you want a defaultdict, you need to initialize it:

>>> dd = defaultdict(list)
>>> df.to_dict('records', into=dd)
[defaultdict(<class 'list'>, {'col1': 1, 'col2': 0.5}),
 defaultdict(<class 'list'>, {'col1': 2, 'col2': 0.75})]
to_gbq(destination_table, project_id=None, chunksize=None, reauth=False, if_exists='fail', auth_local_webserver=False, table_schema=None, location=None, progress_bar=True, credentials=None) → None[source]

Write a DataFrame to a Google BigQuery table.

This function requires the pandas-gbq package.

See the How to authenticate with Google BigQuery guide for authentication instructions.

destination_table : str

Name of table to be written, in the form dataset.tablename.

project_id : str, optional

Google BigQuery Account project ID. Optional when available from the environment.

chunksize : int, optional

Number of rows to be inserted in each chunk from the dataframe. Set to None to load the whole dataframe at once.

reauth : bool, default False

Force Google BigQuery to re-authenticate the user. This is useful if multiple accounts are used.

if_exists : str, default ‘fail’

Behavior when the destination table exists. Value can be one of:

'fail'

If table exists raise pandas_gbq.gbq.TableCreationError.

'replace'

If table exists, drop it, recreate it, and insert data.

'append'

If table exists, insert data. Create if does not exist.

auth_local_webserver : bool, default False

Use the local webserver flow instead of the console flow when getting user credentials.

New in version 0.2.0 of pandas-gbq.

table_schema : list of dicts, optional

List of BigQuery table fields to which the DataFrame columns conform, e.g. [{'name': 'col1', 'type': 'STRING'},...]. If a schema is not provided, it will be generated according to the dtypes of the DataFrame columns. See BigQuery API documentation on available names of a field.

New in version 0.3.1 of pandas-gbq.

location : str, optional

Location where the load job should run. See the BigQuery locations documentation for a list of available locations. The location must match that of the target dataset.

New in version 0.5.0 of pandas-gbq.

progress_bar : bool, default True

Use the library tqdm to show the progress bar for the upload, chunk by chunk.

New in version 0.5.0 of pandas-gbq.

credentials : google.auth.credentials.Credentials, optional

Credentials for accessing Google APIs. Use this parameter to override default credentials, such as to use Compute Engine google.auth.compute_engine.Credentials or Service Account google.oauth2.service_account.Credentials directly.

New in version 0.8.0 of pandas-gbq.

New in version 0.24.0.

pandas_gbq.to_gbq : This function in the pandas-gbq library. read_gbq : Read a DataFrame from Google BigQuery.
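Because to_gbq needs the pandas-gbq package, network access, and valid Google credentials, the call below is shown but not executed; the dataset, table, and project names are hypothetical:

```python
import pandas as pd

df = pd.DataFrame({"my_string": ["a", "b"], "my_int": [1, 2]})

# Would upload df to BigQuery (requires pandas-gbq and credentials):
# df.to_gbq("my_dataset.my_table", project_id="my-project",
#           if_exists="replace")

assert df.shape == (2, 2)
```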

classmethod from_records(data, index=None, exclude=None, columns=None, coerce_float=False, nrows=None) → pandas.core.frame.DataFrame[source]

Convert structured or record ndarray to DataFrame.

Creates a DataFrame object from a structured ndarray, sequence of tuples or dicts, or DataFrame.

data : structured ndarray, sequence of tuples or dicts, or DataFrame

Structured input data.

index : str, list of fields, array-like

Field of array to use as the index, alternately a specific set of input labels to use.

exclude : sequence, default None

Columns or fields to exclude.

columns : sequence, default None

Column names to use. If the passed data do not have names associated with them, this argument provides names for the columns. Otherwise this argument indicates the order of the columns in the result (any names not found in the data will become all-NA columns).

coerce_float : bool, default False

Attempt to convert values of non-string, non-numeric objects (like decimal.Decimal) to floating point, useful for SQL result sets.

nrows : int, default None

Number of rows to read if data is an iterator.

DataFrame

DataFrame.from_dict : DataFrame from dict of array-like or dicts. DataFrame : DataFrame object creation using constructor.

Data can be provided as a structured ndarray:

>>> data = np.array([(3, 'a'), (2, 'b'), (1, 'c'), (0, 'd')],
...                 dtype=[('col_1', 'i4'), ('col_2', 'U1')])
>>> pd.DataFrame.from_records(data)
   col_1 col_2
0      3     a
1      2     b
2      1     c
3      0     d

Data can be provided as a list of dicts:

>>> data = [{'col_1': 3, 'col_2': 'a'},
...         {'col_1': 2, 'col_2': 'b'},
...         {'col_1': 1, 'col_2': 'c'},
...         {'col_1': 0, 'col_2': 'd'}]
>>> pd.DataFrame.from_records(data)
   col_1 col_2
0      3     a
1      2     b
2      1     c
3      0     d

Data can be provided as a list of tuples with corresponding columns:

>>> data = [(3, 'a'), (2, 'b'), (1, 'c'), (0, 'd')]
>>> pd.DataFrame.from_records(data, columns=['col_1', 'col_2'])
   col_1 col_2
0      3     a
1      2     b
2      1     c
3      0     d
to_records(index=True, column_dtypes=None, index_dtypes=None) → numpy.recarray[source]

Convert DataFrame to a NumPy record array.

Index will be included as the first field of the record array if requested.

index : bool, default True

Include index in resulting record array, stored in ‘index’ field or using the index label, if set.

column_dtypes : str, type, dict, default None

New in version 0.24.0.

If a string or type, the data type to store all columns. If a dictionary, a mapping of column names and indices (zero-indexed) to specific data types.

index_dtypes : str, type, dict, default None

New in version 0.24.0.

If a string or type, the data type to store all index levels. If a dictionary, a mapping of index level names and indices (zero-indexed) to specific data types.

This mapping is applied only if index=True.

numpy.recarray

NumPy ndarray with the DataFrame labels as fields and each row of the DataFrame as entries.

DataFrame.from_records : Convert structured or record ndarray to DataFrame.

numpy.recarray : An ndarray that allows field access using attributes, analogous to typed columns in a spreadsheet.

>>> df = pd.DataFrame({'A': [1, 2], 'B': [0.5, 0.75]},
...                   index=['a', 'b'])
>>> df
   A     B
a  1  0.50
b  2  0.75
>>> df.to_records()
rec.array([('a', 1, 0.5 ), ('b', 2, 0.75)],
          dtype=[('index', 'O'), ('A', '<i8'), ('B', '<f8')])

If the DataFrame index has no label then the recarray field name is set to ‘index’. If the index has a label then this is used as the field name:

>>> df.index = df.index.rename("I")
>>> df.to_records()
rec.array([('a', 1, 0.5 ), ('b', 2, 0.75)],
          dtype=[('I', 'O'), ('A', '<i8'), ('B', '<f8')])

The index can be excluded from the record array:

>>> df.to_records(index=False)
rec.array([(1, 0.5 ), (2, 0.75)],
          dtype=[('A', '<i8'), ('B', '<f8')])

Data types can be specified for the columns:

>>> df.to_records(column_dtypes={"A": "int32"})
rec.array([('a', 1, 0.5 ), ('b', 2, 0.75)],
          dtype=[('I', 'O'), ('A', '<i4'), ('B', '<f8')])

As well as for the index:

>>> df.to_records(index_dtypes="<S2")
rec.array([(b'a', 1, 0.5 ), (b'b', 2, 0.75)],
          dtype=[('I', 'S2'), ('A', '<i8'), ('B', '<f8')])
>>> index_dtypes = f"<S{df.index.str.len().max()}"
>>> df.to_records(index_dtypes=index_dtypes)
rec.array([(b'a', 1, 0.5 ), (b'b', 2, 0.75)],
          dtype=[('I', 'S1'), ('A', '<i8'), ('B', '<f8')])
to_stata(path: Union[PathLike[str], str, IO[T], io.RawIOBase, io.BufferedIOBase, io.TextIOBase, _io.TextIOWrapper, mmap.mmap], convert_dates: Optional[Dict[Optional[Hashable], str]] = None, write_index: bool = True, byteorder: Optional[str] = None, time_stamp: Optional[datetime.datetime] = None, data_label: Optional[str] = None, variable_labels: Optional[Dict[Optional[Hashable], str]] = None, version: Optional[int] = 114, convert_strl: Optional[Sequence[Optional[Hashable]]] = None, compression: Optional[Union[str, Dict[str, Any]]] = 'infer', storage_options: Optional[Dict[str, Any]] = None) → None[source]

Export DataFrame object to Stata dta format.

Writes the DataFrame to a Stata dataset file. “dta” files contain a Stata dataset.

path : str, buffer or path object

String, path object (pathlib.Path or py._path.local.LocalPath) or object implementing a binary write() function. If using a buffer then the buffer will not be automatically closed after the file data has been written.

Changed in version 1.0.0.

Previously this was “fname”

convert_dates : dict

Dictionary mapping columns containing datetime types to stata internal format to use when writing the dates. Options are ‘tc’, ‘td’, ‘tm’, ‘tw’, ‘th’, ‘tq’, ‘ty’. Column can be either an integer or a name. Datetime columns that do not have a conversion type specified will be converted to ‘tc’. Raises NotImplementedError if a datetime column has timezone information.

write_index : bool

Write the index to Stata dataset.

byteorder : str

Can be “>”, “<”, “little”, or “big”. Default is sys.byteorder.

time_stamp : datetime

A datetime to use as file creation date. Default is the current time.

data_label : str, optional

A label for the data set. Must be 80 characters or smaller.

variable_labels : dict

Dictionary containing columns as keys and variable labels as values. Each label must be 80 characters or smaller.

version : {114, 117, 118, 119, None}, default 114

Version to use in the output dta file. Set to None to let pandas decide between 118 or 119 formats depending on the number of columns in the frame. Version 114 can be read by Stata 10 and later. Version 117 can be read by Stata 13 or later. Version 118 is supported in Stata 14 and later. Version 119 is supported in Stata 15 and later. Version 114 limits string variables to 244 characters or fewer while versions 117 and later allow strings with lengths up to 2,000,000 characters. Versions 118 and 119 support Unicode characters, and version 119 supports more than 32,767 variables.

Version 119 should usually only be used when the number of variables exceeds the capacity of dta format 118. Exporting smaller datasets in format 119 may have unintended consequences, and, as of November 2020, Stata SE cannot read version 119 files.

Changed in version 1.0.0: Added support for formats 118 and 119.

convert_strl : list, optional

List of column names to convert to the Stata StrL format. Only available if version is 117. Storing strings in the StrL format can produce smaller dta files if strings have more than 8 characters and values are repeated.

compression : str or dict, default ‘infer’

For on-the-fly compression of the output dta. If string, specifies compression mode. If dict, value at key ‘method’ specifies compression mode. Compression mode must be one of {‘infer’, ‘gzip’, ‘bz2’, ‘zip’, ‘xz’, None}. If compression mode is ‘infer’ and fname is path-like, then detect compression from the following extensions: ‘.gz’, ‘.bz2’, ‘.zip’, or ‘.xz’ (otherwise no compression). If dict and compression mode is one of {‘zip’, ‘gzip’, ‘bz2’}, or inferred as one of the above, other entries passed as additional compression options.

New in version 1.1.0.

storage_options : dict, optional

Extra options that make sense for a particular storage connection, e.g. host, port, username, password, etc., if using a URL that will be parsed by fsspec, e.g., starting “s3://”, “gcs://”. An error will be raised if providing this argument with a non-fsspec URL. See the fsspec and backend storage implementation docs for the set of allowed keys and values.

New in version 1.2.0.

NotImplementedError
  • If datetimes contain timezone information

  • Column dtype is not representable in Stata

ValueError
  • Columns listed in convert_dates are neither datetime64[ns] nor datetime.datetime

  • Column listed in convert_dates is not in DataFrame

  • Categorical label contains more than 32,000 characters

read_stata : Import Stata data files.

io.stata.StataWriter : Low-level writer for Stata data files.

io.stata.StataWriter117 : Low-level writer for version 117 files.

>>> df = pd.DataFrame({'animal': ['falcon', 'parrot', 'falcon',
...                               'parrot'],
...                    'speed': [350, 18, 361, 15]})
>>> df.to_stata('animals.dta')  
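Since compression can be given either as a string or as a dict, the two spellings below are equivalent; this is a minimal sketch (the file name is illustrative, and a temporary directory is used so the snippet is self-contained):

```python
import os
import tempfile

import pandas as pd

df = pd.DataFrame({"animal": ["falcon", "parrot"], "speed": [350, 18]})
path = os.path.join(tempfile.mkdtemp(), "animals.dta.gz")

# Compression inferred from the ".gz" extension...
df.to_stata(path)

# ...or requested explicitly via the dict form
df.to_stata(path, compression={"method": "gzip"})
```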
to_feather(path: Union[PathLike[str], str, IO, io.RawIOBase, io.BufferedIOBase, io.TextIOBase, _io.TextIOWrapper, mmap.mmap], **kwargs) → None[source]

Write a DataFrame to the binary Feather format.

path : str or file-like object

If a string, it will be used as Root Directory path.

**kwargs :

Additional keywords passed to pyarrow.feather.write_feather(). Starting with pyarrow 0.17, this includes the compression, compression_level, chunksize and version keywords.

New in version 1.1.0.

to_markdown(buf: Optional[Union[IO[str], str]] = None, mode: str = 'wt', index: bool = True, storage_options: Optional[Dict[str, Any]] = None, **kwargs) → Optional[str][source]

Print DataFrame in Markdown-friendly format.

New in version 1.0.0.

buf : str, Path or StringIO-like, optional, default None

Buffer to write to. If None, the output is returned as a string.

mode : str, optional

Mode in which file is opened, “wt” by default.

index : bool, optional, default True

Add index (row) labels.

New in version 1.1.0.

storage_options : dict, optional

Extra options that make sense for a particular storage connection, e.g. host, port, username, password, etc., if using a URL that will be parsed by fsspec, e.g., starting “s3://”, “gcs://”. An error will be raised if providing this argument with a non-fsspec URL. See the fsspec and backend storage implementation docs for the set of allowed keys and values.

New in version 1.2.0.

**kwargs

These parameters will be passed to tabulate.

str

DataFrame in Markdown-friendly format.

Requires the tabulate package.

>>> s = pd.Series(["elk", "pig", "dog", "quetzal"], name="animal")
>>> print(s.to_markdown())
|    | animal   |
|---:|:---------|
|  0 | elk      |
|  1 | pig      |
|  2 | dog      |
|  3 | quetzal  |

Output markdown with a tabulate option.

>>> print(s.to_markdown(tablefmt="grid"))
+----+----------+
|    | animal   |
+====+==========+
|  0 | elk      |
+----+----------+
|  1 | pig      |
+----+----------+
|  2 | dog      |
+----+----------+
|  3 | quetzal  |
+----+----------+
to_parquet(path: Optional[Union[PathLike[str], str, IO[T], io.RawIOBase, io.BufferedIOBase, io.TextIOBase, _io.TextIOWrapper, mmap.mmap]] = None, engine: str = 'auto', compression: Optional[str] = 'snappy', index: Optional[bool] = None, partition_cols: Optional[List[str]] = None, storage_options: Optional[Dict[str, Any]] = None, **kwargs) → Optional[bytes][source]

Write a DataFrame to the binary parquet format.

This function writes the dataframe as a parquet file. You can choose different parquet backends, and have the option of compression. See the user guide for more details.

path : str or file-like object, default None

If a string, it will be used as Root Directory path when writing a partitioned dataset. By file-like object, we refer to objects with a write() method, such as a file handle (e.g. via builtin open function) or io.BytesIO. The engine fastparquet does not accept file-like objects. If path is None, a bytes object is returned.

Changed in version 1.2.0: Previously this was “fname”.

engine : {‘auto’, ‘pyarrow’, ‘fastparquet’}, default ‘auto’

Parquet library to use. If ‘auto’, then the option io.parquet.engine is used. The default io.parquet.engine behavior is to try ‘pyarrow’, falling back to ‘fastparquet’ if ‘pyarrow’ is unavailable.

compression : {‘snappy’, ‘gzip’, ‘brotli’, None}, default ‘snappy’

Name of the compression to use. Use None for no compression.

index : bool, default None

If True, include the dataframe’s index(es) in the file output. If False, they will not be written to the file. If None, similar to True the dataframe’s index(es) will be saved. However, instead of being saved as values, the RangeIndex will be stored as a range in the metadata so it doesn’t require much space and is faster. Other indexes will be included as columns in the file output.

New in version 0.24.0.

partition_cols : list, optional, default None

Column names by which to partition the dataset. Columns are partitioned in the order they are given. Must be None if path is not a string.

New in version 0.24.0.

storage_options : dict, optional

Extra options that make sense for a particular storage connection, e.g. host, port, username, password, etc., if using a URL that will be parsed by fsspec, e.g., starting “s3://”, “gcs://”. An error will be raised if providing this argument with a non-fsspec URL. See the fsspec and backend storage implementation docs for the set of allowed keys and values.

New in version 1.2.0.

**kwargs

Additional arguments passed to the parquet library. See pandas io for more details.

bytes if no path argument is provided else None

read_parquet : Read a parquet file.

DataFrame.to_csv : Write a csv file.

DataFrame.to_sql : Write to a sql table.

DataFrame.to_hdf : Write to hdf.

This function requires either the fastparquet or pyarrow library.

>>> df = pd.DataFrame(data={'col1': [1, 2], 'col2': [3, 4]})
>>> df.to_parquet('df.parquet.gzip',
...               compression='gzip')  
>>> pd.read_parquet('df.parquet.gzip')  
   col1  col2
0     1     3
1     2     4

If you want to get a buffer to the parquet content you can use a io.BytesIO object, as long as you don’t use partition_cols, which creates multiple files.

>>> import io
>>> f = io.BytesIO()
>>> df.to_parquet(f)
>>> f.seek(0)
0
>>> content = f.read()
to_html(buf=None, columns=None, col_space=None, header=True, index=True, na_rep='NaN', formatters=None, float_format=None, sparsify=None, index_names=True, justify=None, max_rows=None, max_cols=None, show_dimensions=False, decimal='.', bold_rows=True, classes=None, escape=True, notebook=False, border=None, table_id=None, render_links=False, encoding=None)[source]

Render a DataFrame as an HTML table.

buf : str, Path or StringIO-like, optional, default None

Buffer to write to. If None, the output is returned as a string.

columns : sequence, optional, default None

The subset of columns to write. Writes all columns by default.

col_space : str or int, list or dict of int or str, optional

The minimum width of each column in CSS length units. An int is assumed to be px units.

New in version 0.25.0: Ability to use str.

header : bool, optional

Whether to print column labels, default True.

index : bool, optional, default True

Whether to print index (row) labels.

na_rep : str, optional, default ‘NaN’

String representation of NaN to use.

formatters : list, tuple or dict of one-param. functions, optional

Formatter functions to apply to columns’ elements by position or name. The result of each function must be a unicode string. List/tuple must be of length equal to the number of columns.

float_format : one-parameter function, optional, default None

Formatter function to apply to columns’ elements if they are floats. This function must return a unicode string and will be applied only to the non-NaN elements, with NaN being handled by na_rep.

Changed in version 1.2.0.

sparsify : bool, optional, default True

Set to False for a DataFrame with a hierarchical index to print every multiindex key at each row.

index_names : bool, optional, default True

Prints the names of the indexes.

justify : str, default None

How to justify the column labels. If None uses the option from the print configuration (controlled by set_option), ‘right’ out of the box. Valid values are

  • left

  • right

  • center

  • justify

  • justify-all

  • start

  • end

  • inherit

  • match-parent

  • initial

  • unset.

max_rows : int, optional

Maximum number of rows to display in the console.

min_rows : int, optional

The number of rows to display in the console in a truncated repr (when number of rows is above max_rows).

max_cols : int, optional

Maximum number of columns to display in the console.

show_dimensions : bool, default False

Display DataFrame dimensions (number of rows by number of columns).

decimal : str, default ‘.’

Character recognized as decimal separator, e.g. ‘,’ in Europe.

bold_rows : bool, default True

Make the row labels bold in the output.

classes : str or list or tuple, default None

CSS class(es) to apply to the resulting html table.

escape : bool, default True

Convert the characters <, >, and & to HTML-safe sequences.

notebook : {True, False}, default False

Whether the generated HTML is for IPython Notebook.

border : int

A border=border attribute is included in the opening <table> tag. Default pd.options.display.html.border.

encoding : str, default “utf-8”

Set character encoding.

New in version 1.0.

table_id : str, optional

A css id is included in the opening <table> tag if specified.

render_links : bool, default False

Convert URLs to HTML links.

New in version 0.24.0.

str or None

If buf is None, returns the result as a string. Otherwise returns None.

to_string : Convert DataFrame to a string.
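As a minimal sketch of the default escaping behavior (the column contents are illustrative):

```python
import pandas as pd

df = pd.DataFrame({"name": ["a & b"], "n": [1]})

# escape=True (the default) converts <, > and & to HTML-safe sequences,
# so "a & b" appears in the markup as "a &amp; b"
html = df.to_html(index=False)
```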

info(verbose: Optional[bool] = None, buf: Optional[IO[str]] = None, max_cols: Optional[int] = None, memory_usage: Optional[Union[bool, str]] = None, show_counts: Optional[bool] = None, null_counts: Optional[bool] = None) → None[source]

Print a concise summary of a DataFrame.

This method prints information about a DataFrame including the index dtype and columns, non-null values and memory usage.

data : DataFrame

DataFrame to print information about.

verbose : bool, optional

Whether to print the full summary. By default, the setting in pandas.options.display.max_info_columns is followed.

buf : writable buffer, defaults to sys.stdout

Where to send the output. By default, the output is printed to sys.stdout. Pass a writable buffer if you need to further process the output.

max_cols : int, optional

When to switch from the verbose to the truncated output. If the DataFrame has more than max_cols columns, the truncated output is used. By default, the setting in pandas.options.display.max_info_columns is used.

memory_usage : bool or str, optional

Specifies whether total memory usage of the DataFrame elements (including the index) should be displayed. By default, this follows the pandas.options.display.memory_usage setting.

True always show memory usage. False never shows memory usage. A value of ‘deep’ is equivalent to “True with deep introspection”. Memory usage is shown in human-readable units (base-2 representation). Without deep introspection a memory estimation is made based in column dtype and number of rows assuming values consume the same memory amount for corresponding dtypes. With deep memory introspection, a real memory usage calculation is performed at the cost of computational resources.

show_counts : bool, optional

Whether to show the non-null counts. By default, this is shown only if the DataFrame is smaller than pandas.options.display.max_info_rows and pandas.options.display.max_info_columns. A value of True always shows the counts, and False never shows the counts.

null_counts : bool, optional

Deprecated since version 1.2.0: Use show_counts instead.

None

This method prints a summary of a DataFrame and returns None.

DataFrame.describe : Generate descriptive statistics of DataFrame columns.

DataFrame.memory_usage : Memory usage of DataFrame columns.

>>> int_values = [1, 2, 3, 4, 5]
>>> text_values = ['alpha', 'beta', 'gamma', 'delta', 'epsilon']
>>> float_values = [0.0, 0.25, 0.5, 0.75, 1.0]
>>> df = pd.DataFrame({"int_col": int_values, "text_col": text_values,
...                   "float_col": float_values})
>>> df
    int_col text_col  float_col
0        1    alpha       0.00
1        2     beta       0.25
2        3    gamma       0.50
3        4    delta       0.75
4        5  epsilon       1.00

Prints information of all columns:

>>> df.info(verbose=True)
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 5 entries, 0 to 4
Data columns (total 3 columns):
 #   Column     Non-Null Count  Dtype
---  ------     --------------  -----
 0   int_col    5 non-null      int64
 1   text_col   5 non-null      object
 2   float_col  5 non-null      float64
dtypes: float64(1), int64(1), object(1)
memory usage: 248.0+ bytes

Prints a summary of the column count and dtypes but no per-column information:

>>> df.info(verbose=False)
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 5 entries, 0 to 4
Columns: 3 entries, int_col to float_col
dtypes: float64(1), int64(1), object(1)
memory usage: 248.0+ bytes

Pipe the output of DataFrame.info to a buffer instead of sys.stdout, get the buffer content and write it to a text file:

>>> import io
>>> buffer = io.StringIO()
>>> df.info(buf=buffer)
>>> s = buffer.getvalue()
>>> with open("df_info.txt", "w",
...           encoding="utf-8") as f:  
...     f.write(s)
260

The memory_usage parameter allows deep introspection mode, especially useful for big DataFrames when fine-tuning memory optimization:

>>> random_strings_array = np.random.choice(['a', 'b', 'c'], 10 ** 6)
>>> df = pd.DataFrame({
...     'column_1': np.random.choice(['a', 'b', 'c'], 10 ** 6),
...     'column_2': np.random.choice(['a', 'b', 'c'], 10 ** 6),
...     'column_3': np.random.choice(['a', 'b', 'c'], 10 ** 6)
... })
>>> df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1000000 entries, 0 to 999999
Data columns (total 3 columns):
 #   Column    Non-Null Count    Dtype
---  ------    --------------    -----
 0   column_1  1000000 non-null  object
 1   column_2  1000000 non-null  object
 2   column_3  1000000 non-null  object
dtypes: object(3)
memory usage: 22.9+ MB
>>> df.info(memory_usage='deep')
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1000000 entries, 0 to 999999
Data columns (total 3 columns):
 #   Column    Non-Null Count    Dtype
---  ------    --------------    -----
 0   column_1  1000000 non-null  object
 1   column_2  1000000 non-null  object
 2   column_3  1000000 non-null  object
dtypes: object(3)
memory usage: 165.9 MB
memory_usage(index=True, deep=False) → pandas.core.series.Series[source]

Return the memory usage of each column in bytes.

The memory usage can optionally include the contribution of the index and elements of object dtype.

This value is displayed in DataFrame.info by default. This can be suppressed by setting pandas.options.display.memory_usage to False.

index : bool, default True

Specifies whether to include the memory usage of the DataFrame’s index in returned Series. If index=True, the memory usage of the index is the first item in the output.

deep : bool, default False

If True, introspect the data deeply by interrogating object dtypes for system-level memory consumption, and include it in the returned values.

Series

A Series whose index is the original column names and whose values are the memory usage of each column in bytes.

numpy.ndarray.nbytes : Total bytes consumed by the elements of an ndarray.

Series.memory_usage : Bytes consumed by a Series.

Categorical : Memory-efficient array for string values with many repeated values.

DataFrame.info : Concise summary of a DataFrame.

>>> dtypes = ['int64', 'float64', 'complex128', 'object', 'bool']
>>> data = dict([(t, np.ones(shape=5000, dtype=int).astype(t))
...              for t in dtypes])
>>> df = pd.DataFrame(data)
>>> df.head()
   int64  float64            complex128  object  bool
0      1      1.0              1.0+0.0j       1  True
1      1      1.0              1.0+0.0j       1  True
2      1      1.0              1.0+0.0j       1  True
3      1      1.0              1.0+0.0j       1  True
4      1      1.0              1.0+0.0j       1  True
>>> df.memory_usage()
Index           128
int64         40000
float64       40000
complex128    80000
object        40000
bool           5000
dtype: int64
>>> df.memory_usage(index=False)
int64         40000
float64       40000
complex128    80000
object        40000
bool           5000
dtype: int64

The memory footprint of object dtype columns is ignored by default:

>>> df.memory_usage(deep=True)
Index            128
int64          40000
float64        40000
complex128     80000
object        180000
bool            5000
dtype: int64

Use a Categorical for efficient storage of an object-dtype column with many repeated values.

>>> df['object'].astype('category').memory_usage(deep=True)
5244
transpose(*args, copy: bool = False) → pandas.core.frame.DataFrame[source]

Transpose index and columns.

Reflect the DataFrame over its main diagonal by writing rows as columns and vice-versa. The property T is an accessor to the method transpose().

*args : tuple, optional

Accepted for compatibility with NumPy.

copy : bool, default False

Whether to copy the data after transposing, even for DataFrames with a single dtype.

Note that a copy is always required for mixed dtype DataFrames, or for DataFrames with any extension types.

DataFrame

The transposed DataFrame.

numpy.transpose : Permute the dimensions of a given array.

Transposing a DataFrame with mixed dtypes will result in a homogeneous DataFrame with the object dtype. In such a case, a copy of the data is always made.

Square DataFrame with homogeneous dtype

>>> d1 = {'col1': [1, 2], 'col2': [3, 4]}
>>> df1 = pd.DataFrame(data=d1)
>>> df1
   col1  col2
0     1     3
1     2     4
>>> df1_transposed = df1.T # or df1.transpose()
>>> df1_transposed
      0  1
col1  1  2
col2  3  4

When the dtype is homogeneous in the original DataFrame, we get a transposed DataFrame with the same dtype:

>>> df1.dtypes
col1    int64
col2    int64
dtype: object
>>> df1_transposed.dtypes
0    int64
1    int64
dtype: object

Non-square DataFrame with mixed dtypes

>>> d2 = {'name': ['Alice', 'Bob'],
...       'score': [9.5, 8],
...       'employed': [False, True],
...       'kids': [0, 0]}
>>> df2 = pd.DataFrame(data=d2)
>>> df2
    name  score  employed  kids
0  Alice    9.5     False     0
1    Bob    8.0      True     0
>>> df2_transposed = df2.T # or df2.transpose()
>>> df2_transposed
              0     1
name      Alice   Bob
score       9.5   8.0
employed  False  True
kids          0     0

When the DataFrame has mixed dtypes, we get a transposed DataFrame with the object dtype:

>>> df2.dtypes
name         object
score       float64
employed       bool
kids          int64
dtype: object
>>> df2_transposed.dtypes
0    object
1    object
dtype: object
property T
query(expr, inplace=False, **kwargs)[source]

Query the columns of a DataFrame with a boolean expression.

expr : str

The query string to evaluate.

You can refer to variables in the environment by prefixing them with an ‘@’ character like @a + b.

You can refer to column names that are not valid Python variable names by surrounding them in backticks. Thus, column names containing spaces or punctuation (besides underscores) or starting with digits must be surrounded by backticks. (For example, a column named “Area (cm^2)” would be referenced as `Area (cm^2)`.) Column names which are Python keywords (like “list”, “for”, “import”, etc.) cannot be used.

For example, if one of your columns is called a a and you want to sum it with b, your query should be `a a` + b.

New in version 0.25.0: Backtick quoting introduced.

New in version 1.0.0: Expanding functionality of backtick quoting for more than only spaces.

inplace : bool

Whether the query should modify the data in place or return a modified copy.

**kwargs

See the documentation for eval() for complete details on the keyword arguments accepted by DataFrame.query().

DataFrame or None

DataFrame resulting from the provided query expression or None if inplace=True.

eval : Evaluate a string describing operations on DataFrame columns.

DataFrame.eval : Evaluate a string describing operations on DataFrame columns.

The result of the evaluation of this expression is first passed to DataFrame.loc and if that fails because of a multidimensional key (e.g., a DataFrame) then the result will be passed to DataFrame.__getitem__().

This method uses the top-level eval() function to evaluate the passed query.

The query() method uses a slightly modified Python syntax by default. For example, the & and | (bitwise) operators have the precedence of their boolean cousins, and and or. This is syntactically valid Python, however the semantics are different.

You can change the semantics of the expression by passing the keyword argument parser='python'. This enforces the same semantics as evaluation in Python space. Likewise, you can pass engine='python' to evaluate an expression using Python itself as a backend. This is not recommended as it is inefficient compared to using numexpr as the engine.

The DataFrame.index and DataFrame.columns attributes of the DataFrame instance are placed in the query namespace by default, which allows you to treat both the index and columns of the frame as a column in the frame. The identifier index is used for the frame index; you can also use the name of the index to identify it in a query. Please note that Python keywords may not be used as identifiers.

For further details and examples see the query documentation in indexing.

Backtick quoted variables

Backtick quoted variables are parsed as literal Python code and are converted internally to a Python valid identifier. This can lead to the following problems.

During parsing a number of disallowed characters inside the backtick quoted string are replaced by strings that are allowed as a Python identifier. These characters include all operators in Python, the space character, the question mark, the exclamation mark, the dollar sign, and the euro sign. For other characters that fall outside the ASCII range (U+0001..U+007F) and those that are not further specified in PEP 3131, the query parser will raise an error. This excludes whitespace different than the space character, but also the hashtag (as it is used for comments) and the backtick itself (backtick can also not be escaped).

In a special case, quotes that make a pair around a backtick can confuse the parser. For example, `it's` > `that's` will raise an error, as it forms a quoted string ('s > `that') with a backtick inside.

See also the Python documentation about lexical analysis (https://docs.python.org/3/reference/lexical_analysis.html) in combination with the source code in pandas.core.computation.parsing.

>>> df = pd.DataFrame({'A': range(1, 6),
...                    'B': range(10, 0, -2),
...                    'C C': range(10, 5, -1)})
>>> df
   A   B  C C
0  1  10   10
1  2   8    9
2  3   6    8
3  4   4    7
4  5   2    6
>>> df.query('A > B')
   A  B  C C
4  5  2    6

The previous expression is equivalent to

>>> df[df.A > df.B]
   A  B  C C
4  5  2    6

For columns with spaces in their name, you can use backtick quoting.

>>> df.query('B == `C C`')
   A   B  C C
0  1  10   10

The previous expression is equivalent to

>>> df[df.B == df['C C']]
   A   B  C C
0  1  10   10
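The @ prefix described above pulls values from the surrounding Python namespace into the query; a minimal sketch (the variable name is illustrative):

```python
import pandas as pd

df = pd.DataFrame({"A": [1, 2, 3, 4]})
threshold = 2

# "@threshold" refers to the local variable, not a column
out = df.query("A > @threshold")
```

This is equivalent to `df[df.A > threshold]`, but keeps the comparison inside the query string.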
eval(expr, inplace=False, **kwargs)[source]

Evaluate a string describing operations on DataFrame columns.

Operates on columns only, not specific rows or elements. This allows eval to run arbitrary code, which can make you vulnerable to code injection if you pass user input to this function.

expr : str

The expression string to evaluate.

inplace : bool, default False

If the expression contains an assignment, whether to perform the operation inplace and mutate the existing DataFrame. Otherwise, a new DataFrame is returned.

**kwargs

See the documentation for eval() for complete details on the keyword arguments accepted by query().

ndarray, scalar, pandas object, or None

The result of the evaluation or None if inplace=True.

DataFrame.query : Evaluates a boolean expression to query the columns of a frame.

DataFrame.assign : Can evaluate an expression or function to create new values for a column.

eval : Evaluate a Python expression as a string using various backends.

For more details see the API documentation for eval(). For detailed examples see enhancing performance with eval.

>>> df = pd.DataFrame({'A': range(1, 6), 'B': range(10, 0, -2)})
>>> df
   A   B
0  1  10
1  2   8
2  3   6
3  4   4
4  5   2
>>> df.eval('A + B')
0    11
1    10
2     9
3     8
4     7
dtype: int64

Assignment is allowed though by default the original DataFrame is not modified.

>>> df.eval('C = A + B')
   A   B   C
0  1  10  11
1  2   8  10
2  3   6   9
3  4   4   8
4  5   2   7
>>> df
   A   B
0  1  10
1  2   8
2  3   6
3  4   4
4  5   2

Use inplace=True to modify the original DataFrame.

>>> df.eval('C = A + B', inplace=True)
>>> df
   A   B   C
0  1  10  11
1  2   8  10
2  3   6   9
3  4   4   8
4  5   2   7

Multiple columns can be assigned to using multi-line expressions:

>>> df.eval(
...     '''
... C = A + B
... D = A - B
... '''
... )
   A   B   C  D
0  1  10  11 -9
1  2   8  10 -6
2  3   6   9 -3
3  4   4   8  0
4  5   2   7  3
select_dtypes(include=None, exclude=None) → pandas.core.frame.DataFrame[source]

Return a subset of the DataFrame’s columns based on the column dtypes.

include, exclude : scalar or list-like

A selection of dtypes or strings to be included/excluded. At least one of these parameters must be supplied.

DataFrame

The subset of the frame including the dtypes in include and excluding the dtypes in exclude.

ValueError
  • If both of include and exclude are empty

  • If include and exclude have overlapping elements

  • If any kind of string dtype is passed in.

DataFrame.dtypes: Return Series with the data type of each column.

  • To select all numeric types, use np.number or 'number'

  • To select strings you must use the object dtype, but note that this will return all object dtype columns

  • See the numpy dtype hierarchy

  • To select datetimes, use np.datetime64, 'datetime' or 'datetime64'

  • To select timedeltas, use np.timedelta64, 'timedelta' or 'timedelta64'

  • To select Pandas categorical dtypes, use 'category'

  • To select Pandas datetimetz dtypes, use 'datetimetz' (new in 0.20.0) or 'datetime64[ns, tz]'

>>> df = pd.DataFrame({'a': [1, 2] * 3,
...                    'b': [True, False] * 3,
...                    'c': [1.0, 2.0] * 3})
>>> df
        a      b  c
0       1   True  1.0
1       2  False  2.0
2       1   True  1.0
3       2  False  2.0
4       1   True  1.0
5       2  False  2.0
>>> df.select_dtypes(include='bool')
   b
0  True
1  False
2  True
3  False
4  True
5  False
>>> df.select_dtypes(include=['float64'])
   c
0  1.0
1  2.0
2  1.0
3  2.0
4  1.0
5  2.0
>>> df.select_dtypes(exclude=['int64'])
       b    c
0   True  1.0
1  False  2.0
2   True  1.0
3  False  2.0
4   True  1.0
5  False  2.0
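The 'number' selector mentioned in the notes matches every numeric dtype at once; a minimal sketch (column names are illustrative):

```python
import pandas as pd

df = pd.DataFrame({"i": [1, 2], "f": [1.0, 2.0], "s": ["a", "b"]})

# 'number' selects both the int64 and float64 columns, skipping object
num = df.select_dtypes(include="number")
```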
insert(loc, column, value, allow_duplicates=False) → None[source]

Insert column into DataFrame at specified location.

Raises a ValueError if column is already contained in the DataFrame, unless allow_duplicates is set to True.

loc : int

Insertion index. Must verify 0 <= loc <= len(columns).

column : str, number, or hashable object

Label of the inserted column.

value : int, Series, or array-like

allow_duplicates : bool, optional
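A minimal sketch of inserting a column at a specific position (the new column name is illustrative):

```python
import pandas as pd

df = pd.DataFrame({"col1": [1, 2], "col2": [3, 4]})

# Insert "newcol" at position 1, i.e. between col1 and col2;
# insert mutates df in place and returns None
df.insert(1, "newcol", [99, 99])
```

Inserting "newcol" a second time would raise a ValueError unless allow_duplicates=True is passed.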

assign(**kwargs) → pandas.core.frame.DataFrame[source]

Assign new columns to a DataFrame.

Returns a new object with all original columns in addition to new ones. Existing columns that are re-assigned will be overwritten.

**kwargs : dict of {str: callable or Series}

The column names are keywords. If the values are callable, they are computed on the DataFrame and assigned to the new columns. The callable must not change input DataFrame (though pandas doesn’t check it). If the values are not callable, (e.g. a Series, scalar, or array), they are simply assigned.

DataFrame

A new DataFrame with the new columns in addition to all the existing columns.

Assigning multiple columns within the same assign is possible. Later items in ‘**kwargs’ may refer to newly created or modified columns in ‘df’; items are computed and assigned into ‘df’ in order.

>>> df = pd.DataFrame({'temp_c': [17.0, 25.0]},
...                   index=['Portland', 'Berkeley'])
>>> df
          temp_c
Portland    17.0
Berkeley    25.0

Where the value is a callable, evaluated on df:

>>> df.assign(temp_f=lambda x: x.temp_c * 9 / 5 + 32)
          temp_c  temp_f
Portland    17.0    62.6
Berkeley    25.0    77.0

Alternatively, the same behavior can be achieved by directly referencing an existing Series or sequence:

>>> df.assign(temp_f=df['temp_c'] * 9 / 5 + 32)
          temp_c  temp_f
Portland    17.0    62.6
Berkeley    25.0    77.0

You can create multiple columns within the same assign where one of the columns depends on another one defined within the same assign:

>>> df.assign(temp_f=lambda x: x['temp_c'] * 9 / 5 + 32,
...           temp_k=lambda x: (x['temp_f'] +  459.67) * 5 / 9)
          temp_c  temp_f  temp_k
Portland    17.0    62.6  290.15
Berkeley    25.0    77.0  298.15
lookup(row_labels, col_labels) → numpy.ndarray[source]

Label-based “fancy indexing” function for DataFrame. Given equal-length arrays of row and column labels, return an array of the values corresponding to each (row, col) pair.

Deprecated since version 1.2.0: DataFrame.lookup is deprecated, use DataFrame.melt and DataFrame.loc instead. For further details see Looking up values by index/column labels.

row_labels : sequence

The row labels to use for lookup.

col_labels : sequence

The column labels to use for lookup.

numpy.ndarray

The found values.
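Since lookup is deprecated, the same (row, col) pairwise values can be recovered with plain label-based access; a minimal sketch (data and labels are illustrative):

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3], "b": [4, 5, 6]}, index=["x", "y", "z"])
rows = ["x", "z"]
cols = ["b", "a"]

# One value per (row, col) pair, matching the deprecated df.lookup(rows, cols)
vals = [df.at[r, c] for r, c in zip(rows, cols)]
```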

align(other, join='outer', axis=None, level=None, copy=True, fill_value=None, method=None, limit=None, fill_axis=0, broadcast_axis=None) → pandas.core.frame.DataFrame[source]

Align two objects on their axes with the specified join method.

Join method is specified for each axis Index.

other : DataFrame or Series

join : {‘outer’, ‘inner’, ‘left’, ‘right’}, default ‘outer’

axis : allowed axis of the other object, default None

Align on index (0), columns (1), or both (None).

level : int or level name, default None

Broadcast across a level, matching Index values on the passed MultiIndex level.

copy : bool, default True

Always returns new objects. If copy=False and no reindexing is required then original objects are returned.

fill_value : scalar, default np.NaN

Value to use for missing values. Defaults to NaN, but can be any “compatible” value.

method : {‘backfill’, ‘bfill’, ‘pad’, ‘ffill’, None}, default None

Method to use for filling holes in reindexed Series:

  • pad / ffill: propagate last valid observation forward to next valid.

  • backfill / bfill: use NEXT valid observation to fill gap.

limit : int, default None

If method is specified, this is the maximum number of consecutive NaN values to forward/backward fill. In other words, if there is a gap with more than this number of consecutive NaNs, it will only be partially filled. If method is not specified, this is the maximum number of entries along the entire axis where NaNs will be filled. Must be greater than 0 if not None.

fill_axis : {0 or ‘index’, 1 or ‘columns’}, default 0

Axis along which to apply the filling method and limit.

broadcast_axis : {0 or ‘index’, 1 or ‘columns’}, default None

Broadcast values along this axis, if aligning two objects of different dimensions.

(left, right) : (DataFrame, type of other)

Aligned objects.
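A minimal sketch of the default outer join on the row axis (the frames are illustrative):

```python
import pandas as pd

left = pd.DataFrame({"a": [1.0, 2.0]}, index=[0, 1])
right = pd.DataFrame({"b": [3.0, 4.0]}, index=[1, 2])

# The outer join aligns both frames on the union of the indexes {0, 1, 2};
# rows missing from one side are filled with NaN
left2, right2 = left.align(right, join="outer", axis=0)
```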

set_axis(labels, axis: Union[str, int] = 0, inplace: bool = False)[source]

Assign desired index to given axis.

Indexes for column or row labels can be changed by assigning a list-like or Index.

labels : list-like, Index

The values for the new index.

axis : {0 or ‘index’, 1 or ‘columns’}, default 0

The axis to update. The value 0 identifies the rows, and 1 identifies the columns.

inplace : bool, default False

Whether to modify the DataFrame in place rather than returning a new instance.

renamed : DataFrame or None

An object of type DataFrame or None if inplace=True.

DataFrame.rename_axis : Alter the name of the index or columns.

>>> df = pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]})

Change the row labels.

>>> df.set_axis(['a', 'b', 'c'], axis='index')
   A  B
a  1  4
b  2  5
c  3  6

Change the column labels.

>>> df.set_axis(['I', 'II'], axis='columns')
   I  II
0  1   4
1  2   5
2  3   6

Now, update the labels inplace.

>>> df.set_axis(['i', 'ii'], axis='columns', inplace=True)
>>> df
   i  ii
0  1   4
1  2   5
2  3   6
reindex(labels=None, index=None, columns=None, axis=None, method=None, copy=True, level=None, fill_value=nan, limit=None, tolerance=None) → pandas.core.frame.DataFrame[source]

Conform Series/DataFrame to new index with optional filling logic.

Places NA/NaN in locations having no value in the previous index. A new object is produced unless the new index is equivalent to the current one and copy=False.

keywords for axes : array-like, optional

New labels / index to conform to, should be specified using keywords. Preferably an Index object to avoid duplicating data.

method : {None, ‘backfill’/’bfill’, ‘pad’/’ffill’, ‘nearest’}

Method to use for filling holes in reindexed DataFrame. Please note: this is only applicable to DataFrames/Series with a monotonically increasing/decreasing index.

  • None (default): don’t fill gaps

  • pad / ffill: Propagate last valid observation forward to next valid.

  • backfill / bfill: Use next valid observation to fill gap.

  • nearest: Use nearest valid observations to fill gap.

copy : bool, default True

Return a new object, even if the passed indexes are the same.

level : int or name

Broadcast across a level, matching Index values on the passed MultiIndex level.

fill_value : scalar, default np.NaN

Value to use for missing values. Defaults to NaN, but can be any “compatible” value.

limit : int, default None

Maximum number of consecutive elements to forward or backward fill.

tolerance : optional

Maximum distance between original and new labels for inexact matches. The values of the index at the matching locations must satisfy the equation abs(index[indexer] - target) <= tolerance.

Tolerance may be a scalar value, which applies the same tolerance to all values, or list-like, which applies variable tolerance per element. List-like includes list, tuple, array, Series, and must be the same size as the index and its dtype must exactly match the index’s type.

Series/DataFrame with changed index.

DataFrame.set_index : Set row labels.
DataFrame.reset_index : Remove row labels or move them to new columns.
DataFrame.reindex_like : Change to same indices as other DataFrame.

DataFrame.reindex supports two calling conventions

  • (index=index_labels, columns=column_labels, ...)

  • (labels, axis={'index', 'columns'}, ...)

We highly recommend using keyword arguments to clarify your intent.

Create a dataframe with some fictional data.

>>> index = ['Firefox', 'Chrome', 'Safari', 'IE10', 'Konqueror']
>>> df = pd.DataFrame({'http_status': [200, 200, 404, 404, 301],
...                   'response_time': [0.04, 0.02, 0.07, 0.08, 1.0]},
...                   index=index)
>>> df
           http_status  response_time
Firefox            200           0.04
Chrome             200           0.02
Safari             404           0.07
IE10               404           0.08
Konqueror          301           1.00

Create a new index and reindex the dataframe. By default values in the new index that do not have corresponding records in the dataframe are assigned NaN.

>>> new_index = ['Safari', 'Iceweasel', 'Comodo Dragon', 'IE10',
...              'Chrome']
>>> df.reindex(new_index)
               http_status  response_time
Safari               404.0           0.07
Iceweasel              NaN            NaN
Comodo Dragon          NaN            NaN
IE10                 404.0           0.08
Chrome               200.0           0.02

We can fill in the missing values by passing a value to the keyword fill_value. Because the index is not monotonically increasing or decreasing, we cannot use arguments to the keyword method to fill the NaN values.

>>> df.reindex(new_index, fill_value=0)
               http_status  response_time
Safari                 404           0.07
Iceweasel                0           0.00
Comodo Dragon            0           0.00
IE10                   404           0.08
Chrome                 200           0.02
>>> df.reindex(new_index, fill_value='missing')
              http_status response_time
Safari                404          0.07
Iceweasel         missing       missing
Comodo Dragon     missing       missing
IE10                  404          0.08
Chrome                200          0.02

We can also reindex the columns.

>>> df.reindex(columns=['http_status', 'user_agent'])
           http_status  user_agent
Firefox            200         NaN
Chrome             200         NaN
Safari             404         NaN
IE10               404         NaN
Konqueror          301         NaN

Or we can use “axis-style” keyword arguments

>>> df.reindex(['http_status', 'user_agent'], axis="columns")
           http_status  user_agent
Firefox            200         NaN
Chrome             200         NaN
Safari             404         NaN
IE10               404         NaN
Konqueror          301         NaN

To further illustrate the filling functionality in reindex, we will create a dataframe with a monotonically increasing index (for example, a sequence of dates).

>>> date_index = pd.date_range('1/1/2010', periods=6, freq='D')
>>> df2 = pd.DataFrame({"prices": [100, 101, np.nan, 100, 89, 88]},
...                    index=date_index)
>>> df2
            prices
2010-01-01   100.0
2010-01-02   101.0
2010-01-03     NaN
2010-01-04   100.0
2010-01-05    89.0
2010-01-06    88.0

Suppose we decide to expand the dataframe to cover a wider date range.

>>> date_index2 = pd.date_range('12/29/2009', periods=10, freq='D')
>>> df2.reindex(date_index2)
            prices
2009-12-29     NaN
2009-12-30     NaN
2009-12-31     NaN
2010-01-01   100.0
2010-01-02   101.0
2010-01-03     NaN
2010-01-04   100.0
2010-01-05    89.0
2010-01-06    88.0
2010-01-07     NaN

The index entries that did not have a value in the original data frame (for example, ‘2009-12-29’) are by default filled with NaN. If desired, we can fill in the missing values using one of several options.

For example, to back-propagate the last valid value to fill the NaN values, pass bfill as an argument to the method keyword.

>>> df2.reindex(date_index2, method='bfill')
            prices
2009-12-29   100.0
2009-12-30   100.0
2009-12-31   100.0
2010-01-01   100.0
2010-01-02   101.0
2010-01-03     NaN
2010-01-04   100.0
2010-01-05    89.0
2010-01-06    88.0
2010-01-07     NaN

Please note that the NaN value present in the original dataframe (at index value 2010-01-03) will not be filled by any of the value propagation schemes. This is because filling while reindexing does not look at dataframe values, but only compares the original and desired indexes. If you do want to fill in the NaN values present in the original dataframe, use the fillna() method.

See the user guide for more.
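The examples above do not exercise the tolerance parameter; a minimal sketch, using a hypothetical frame with a sparse but monotonically increasing index:

```python
import pandas as pd

df = pd.DataFrame({"v": [10, 20, 30]}, index=[0, 5, 10])

# method='nearest' snaps each requested label to the closest existing
# label, but only when the distance is within `tolerance`; label 20 is
# 10 away from its nearest neighbor, so it stays NaN
out = df.reindex([1, 4, 20], method="nearest", tolerance=2)
print(out)
```

With tolerance=2, labels 1 and 4 pick up the values at 0 and 5 respectively, while label 20 is left unfilled.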

drop(labels=None, axis=0, index=None, columns=None, level=None, inplace=False, errors='raise')[source]

Drop specified labels from rows or columns.

Remove rows or columns by specifying label names and corresponding axis, or by specifying directly index or column names. When using a multi-index, labels on different levels can be removed by specifying the level.

labels : single label or list-like

Index or column labels to drop.

axis : {0 or ‘index’, 1 or ‘columns’}, default 0

Whether to drop labels from the index (0 or ‘index’) or columns (1 or ‘columns’).

index : single label or list-like

Alternative to specifying axis (labels, axis=0 is equivalent to index=labels).

columns : single label or list-like

Alternative to specifying axis (labels, axis=1 is equivalent to columns=labels).

level : int or level name, optional

For MultiIndex, level from which the labels will be removed.

inplace : bool, default False

If False, return a copy. Otherwise, do operation inplace and return None.

errors : {‘ignore’, ‘raise’}, default ‘raise’

If ‘ignore’, suppress error and only existing labels are dropped.

DataFrame or None

DataFrame without the removed index or column labels or None if inplace=True.

KeyError

If any of the labels is not found in the selected axis.

DataFrame.loc : Label-location based indexer for selection by label.
DataFrame.dropna : Return DataFrame with labels on given axis omitted where (all or any) data are missing.
DataFrame.drop_duplicates : Return DataFrame with duplicate rows removed, optionally only considering certain columns.

Series.drop : Return Series with specified index labels removed.

>>> df = pd.DataFrame(np.arange(12).reshape(3, 4),
...                   columns=['A', 'B', 'C', 'D'])
>>> df
   A  B   C   D
0  0  1   2   3
1  4  5   6   7
2  8  9  10  11

Drop columns

>>> df.drop(['B', 'C'], axis=1)
   A   D
0  0   3
1  4   7
2  8  11
>>> df.drop(columns=['B', 'C'])
   A   D
0  0   3
1  4   7
2  8  11

Drop a row by index

>>> df.drop([0, 1])
   A  B   C   D
2  8  9  10  11

Drop columns and/or rows of MultiIndex DataFrame

>>> midx = pd.MultiIndex(levels=[['lama', 'cow', 'falcon'],
...                              ['speed', 'weight', 'length']],
...                      codes=[[0, 0, 0, 1, 1, 1, 2, 2, 2],
...                             [0, 1, 2, 0, 1, 2, 0, 1, 2]])
>>> df = pd.DataFrame(index=midx, columns=['big', 'small'],
...                   data=[[45, 30], [200, 100], [1.5, 1], [30, 20],
...                         [250, 150], [1.5, 0.8], [320, 250],
...                         [1, 0.8], [0.3, 0.2]])
>>> df
                big     small
lama    speed   45.0    30.0
        weight  200.0   100.0
        length  1.5     1.0
cow     speed   30.0    20.0
        weight  250.0   150.0
        length  1.5     0.8
falcon  speed   320.0   250.0
        weight  1.0     0.8
        length  0.3     0.2
>>> df.drop(index='cow', columns='small')
                big
lama    speed   45.0
        weight  200.0
        length  1.5
falcon  speed   320.0
        weight  1.0
        length  0.3
>>> df.drop(index='length', level=1)
                big     small
lama    speed   45.0    30.0
        weight  200.0   100.0
cow     speed   30.0    20.0
        weight  250.0   150.0
falcon  speed   320.0   250.0
        weight  1.0     0.8
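The errors parameter is not shown in the examples above; a short sketch of errors='ignore' on the same kind of frame:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.arange(12).reshape(3, 4), columns=['A', 'B', 'C', 'D'])

# 'X' does not exist: with the default errors='raise' this call would
# raise a KeyError; errors='ignore' drops the labels that do exist and
# silently skips the rest
result = df.drop(columns=['B', 'X'], errors='ignore')
print(list(result.columns))  # ['A', 'C', 'D']
```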
rename(mapper: Optional[Union[Mapping[Optional[Hashable], Any], Callable[[Optional[Hashable]], Optional[Hashable]]]] = None, index: Optional[Union[Mapping[Optional[Hashable], Any], Callable[[Optional[Hashable]], Optional[Hashable]]]] = None, columns: Optional[Union[Mapping[Optional[Hashable], Any], Callable[[Optional[Hashable]], Optional[Hashable]]]] = None, axis: Optional[Union[str, int]] = None, copy: bool = True, inplace: bool = False, level: Union[Hashable, None, int] = None, errors: str = 'ignore') → Optional[pandas.core.frame.DataFrame][source]

Alter axes labels.

Function / dict values must be unique (1-to-1). Labels not contained in a dict / Series will be left as-is. Extra labels listed don’t throw an error.

See the user guide for more.

mapper : dict-like or function

Dict-like or function transformations to apply to that axis’ values. Use either mapper and axis to specify the axis to target with mapper, or index and columns.

index : dict-like or function

Alternative to specifying axis (mapper, axis=0 is equivalent to index=mapper).

columns : dict-like or function

Alternative to specifying axis (mapper, axis=1 is equivalent to columns=mapper).

axis : {0 or ‘index’, 1 or ‘columns’}, default 0

Axis to target with mapper. Can be either the axis name (‘index’, ‘columns’) or number (0, 1). The default is ‘index’.

copy : bool, default True

Also copy underlying data.

inplace : bool, default False

Whether to return a new DataFrame. If True then value of copy is ignored.

level : int or level name, default None

In case of a MultiIndex, only rename labels in the specified level.

errors : {‘ignore’, ‘raise’}, default ‘ignore’

If ‘raise’, raise a KeyError when a dict-like mapper, index, or columns contains labels that are not present in the Index being transformed. If ‘ignore’, existing keys will be renamed and extra keys will be ignored.

DataFrame or None

DataFrame with the renamed axis labels or None if inplace=True.

KeyError

If any of the labels is not found in the selected axis and “errors=’raise’”.

DataFrame.rename_axis : Set the name of the axis.

DataFrame.rename supports two calling conventions

  • (index=index_mapper, columns=columns_mapper, ...)

  • (mapper, axis={'index', 'columns'}, ...)

We highly recommend using keyword arguments to clarify your intent.

Rename columns using a mapping:

>>> df = pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]})
>>> df.rename(columns={"A": "a", "B": "c"})
   a  c
0  1  4
1  2  5
2  3  6

Rename index using a mapping:

>>> df.rename(index={0: "x", 1: "y", 2: "z"})
   A  B
x  1  4
y  2  5
z  3  6

Cast index labels to a different type:

>>> df.index
RangeIndex(start=0, stop=3, step=1)
>>> df.rename(index=str).index
Index(['0', '1', '2'], dtype='object')
>>> df.rename(columns={"A": "a", "B": "b", "C": "c"}, errors="raise")
Traceback (most recent call last):
KeyError: ['C'] not found in axis

Using axis-style parameters:

>>> df.rename(str.lower, axis='columns')
   a  b
0  1  4
1  2  5
2  3  6
>>> df.rename({1: 2, 2: 4}, axis='index')
   A  B
0  1  4
2  2  5
4  3  6
fillna(value=None, method=None, axis=None, inplace=False, limit=None, downcast=None) → Optional[pandas.core.frame.DataFrame][source]

Fill NA/NaN values using the specified method.

value : scalar, dict, Series, or DataFrame

Value to use to fill holes (e.g. 0), alternately a dict/Series/DataFrame of values specifying which value to use for each index (for a Series) or column (for a DataFrame). Values not in the dict/Series/DataFrame will not be filled. This value cannot be a list.

method : {‘backfill’, ‘bfill’, ‘pad’, ‘ffill’, None}, default None

Method to use for filling holes in reindexed Series. pad / ffill: propagate last valid observation forward to next valid. backfill / bfill: use next valid observation to fill gap.

axis : {0 or ‘index’, 1 or ‘columns’}

Axis along which to fill missing values.

inplace : bool, default False

If True, fill in-place. Note: this will modify any other views on this object (e.g., a no-copy slice for a column in a DataFrame).

limit : int, default None

If method is specified, this is the maximum number of consecutive NaN values to forward/backward fill. In other words, if there is a gap with more than this number of consecutive NaNs, it will only be partially filled. If method is not specified, this is the maximum number of entries along the entire axis where NaNs will be filled. Must be greater than 0 if not None.

downcast : dict, default is None

A dict of item->dtype of what to downcast if possible, or the string ‘infer’ which will try to downcast to an appropriate equal type (e.g. float64 to int64 if possible).

DataFrame or None

Object with missing values filled or None if inplace=True.

interpolate : Fill NaN values using interpolation.
reindex : Conform object to new index.
asfreq : Convert TimeSeries to specified frequency.

>>> df = pd.DataFrame([[np.nan, 2, np.nan, 0],
...                    [3, 4, np.nan, 1],
...                    [np.nan, np.nan, np.nan, 5],
...                    [np.nan, 3, np.nan, 4]],
...                   columns=list('ABCD'))
>>> df
     A    B   C  D
0  NaN  2.0 NaN  0
1  3.0  4.0 NaN  1
2  NaN  NaN NaN  5
3  NaN  3.0 NaN  4

Replace all NaN elements with 0s.

>>> df.fillna(0)
    A   B   C   D
0   0.0 2.0 0.0 0
1   3.0 4.0 0.0 1
2   0.0 0.0 0.0 5
3   0.0 3.0 0.0 4

We can also propagate non-null values forward or backward.

>>> df.fillna(method='ffill')
    A   B   C   D
0   NaN 2.0 NaN 0
1   3.0 4.0 NaN 1
2   3.0 4.0 NaN 5
3   3.0 3.0 NaN 4

Replace all NaN elements in column ‘A’, ‘B’, ‘C’, and ‘D’, with 0, 1, 2, and 3 respectively.

>>> values = {'A': 0, 'B': 1, 'C': 2, 'D': 3}
>>> df.fillna(value=values)
    A   B   C   D
0   0.0 2.0 2.0 0
1   3.0 4.0 2.0 1
2   0.0 1.0 2.0 5
3   0.0 3.0 2.0 4

Only replace the first NaN element.

>>> df.fillna(value=values, limit=1)
    A   B   C   D
0   0.0 2.0 2.0 0
1   3.0 4.0 NaN 1
2   NaN 1.0 NaN 5
3   NaN 3.0 NaN 4
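The interaction between method and limit is not shown above; a sketch using the same frame, redefined here so the snippet is self-contained:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame([[np.nan, 2, np.nan, 0],
                   [3, 4, np.nan, 1],
                   [np.nan, np.nan, np.nan, 5],
                   [np.nan, 3, np.nan, 4]],
                  columns=list('ABCD'))

# With method='ffill' and limit=1, each gap is filled by at most one
# consecutive value; the two-row gap in column 'A' is only partially
# filled, and the leading NaN has nothing to propagate from
out = df.fillna(method='ffill', limit=1)
print(out['A'].tolist())  # [nan, 3.0, 3.0, nan]
```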
pop(item: Optional[Hashable]) → pandas.core.series.Series[source]

Return item and drop from frame. Raise KeyError if not found.

item : label

Label of column to be popped.

Series

>>> df = pd.DataFrame([('falcon', 'bird', 389.0),
...                    ('parrot', 'bird', 24.0),
...                    ('lion', 'mammal', 80.5),
...                    ('monkey', 'mammal', np.nan)],
...                   columns=('name', 'class', 'max_speed'))
>>> df
     name   class  max_speed
0  falcon    bird      389.0
1  parrot    bird       24.0
2    lion  mammal       80.5
3  monkey  mammal        NaN
>>> df.pop('class')
0      bird
1      bird
2    mammal
3    mammal
Name: class, dtype: object
>>> df
     name  max_speed
0  falcon      389.0
1  parrot       24.0
2    lion       80.5
3  monkey        NaN
replace(to_replace=None, value=None, inplace=False, limit=None, regex=False, method='pad')[source]

Replace values given in to_replace with value.

Values of the DataFrame are replaced with other values dynamically. This differs from updating with .loc or .iloc, which require you to specify a location to update with some value.

to_replace : str, regex, list, dict, Series, int, float, or None

How to find the values that will be replaced.

  • numeric, str or regex:

    • numeric: numeric values equal to to_replace will be replaced with value

    • str: string exactly matching to_replace will be replaced with value

    • regex: regexs matching to_replace will be replaced with value

  • list of str, regex, or numeric:

    • First, if to_replace and value are both lists, they must be the same length.

    • Second, if regex=True then all of the strings in both lists will be interpreted as regexs otherwise they will match directly. This doesn’t matter much for value since there are only a few possible substitution regexes you can use.

    • str, regex and numeric rules apply as above.

  • dict:

    • Dicts can be used to specify different replacement values for different existing values. For example, {'a': 'b', 'y': 'z'} replaces the value ‘a’ with ‘b’ and ‘y’ with ‘z’. To use a dict in this way the value parameter should be None.

    • For a DataFrame a dict can specify that different values should be replaced in different columns. For example, {'a': 1, 'b': 'z'} looks for the value 1 in column ‘a’ and the value ‘z’ in column ‘b’ and replaces these values with whatever is specified in value. The value parameter should not be None in this case. You can treat this as a special case of passing two lists except that you are specifying the column to search in.

    • For a DataFrame nested dictionaries, e.g., {'a': {'b': np.nan}}, are read as follows: look in column ‘a’ for the value ‘b’ and replace it with NaN. The value parameter should be None to use a nested dict in this way. You can nest regular expressions as well. Note that column names (the top-level dictionary keys in a nested dictionary) cannot be regular expressions.

  • None:

    • This means that the regex argument must be a string, compiled regular expression, or list, dict, ndarray or Series of such elements. If value is also None then this must be a nested dictionary or Series.

See the examples section for examples of each of these.

value : scalar, dict, list, str, regex, default None

Value to replace any values matching to_replace with. For a DataFrame a dict of values can be used to specify which value to use for each column (columns not in the dict will not be filled). Regular expressions, strings and lists or dicts of such objects are also allowed.

inplace : bool, default False

If True, in place. Note: this will modify any other views on this object (e.g. a column from a DataFrame). Returns the caller if this is True.

limit : int or None, default None

Maximum size gap to forward or backward fill.

regex : bool or same types as to_replace, default False

Whether to interpret to_replace and/or value as regular expressions. If this is True then to_replace must be a string. Alternatively, this could be a regular expression or a list, dict, or array of regular expressions in which case to_replace must be None.

method : {‘pad’, ‘ffill’, ‘bfill’, None}

The method to use for replacement, when to_replace is a scalar, list or tuple and value is None.

DataFrame or None

Object after replacement or None if inplace=True.

AssertionError
  • If regex is not a bool and to_replace is not None.

TypeError
  • If to_replace is not a scalar, array-like, dict, or None

  • If to_replace is a dict and value is not a list, dict, ndarray, or Series

  • If to_replace is None and regex is not compilable into a regular expression or is a list, dict, ndarray, or Series.

  • When replacing multiple bool or datetime64 objects and the arguments to to_replace does not match the type of the value being replaced

ValueError
  • If a list or an ndarray is passed to to_replace and value but they are not the same length.

DataFrame.fillna : Fill NA values.
DataFrame.where : Replace values based on boolean condition.
Series.str.replace : Simple string replacement.

  • Regex substitution is performed under the hood with re.sub. The rules for substitution for re.sub are the same.

  • Regular expressions will only substitute on strings, meaning you cannot provide, for example, a regular expression matching floating point numbers and expect the columns in your frame that have a numeric dtype to be matched. However, if those floating point numbers are strings, then you can do this.

  • This method has a lot of options. You are encouraged to experiment and play with this method to gain intuition about how it works.

  • When dict is used as the to_replace value, it is like key(s) in the dict are the to_replace part and value(s) in the dict are the value parameter.

Scalar `to_replace` and `value`

>>> s = pd.Series([0, 1, 2, 3, 4])
>>> s.replace(0, 5)
0    5
1    1
2    2
3    3
4    4
dtype: int64
>>> df = pd.DataFrame({'A': [0, 1, 2, 3, 4],
...                    'B': [5, 6, 7, 8, 9],
...                    'C': ['a', 'b', 'c', 'd', 'e']})
>>> df.replace(0, 5)
   A  B  C
0  5  5  a
1  1  6  b
2  2  7  c
3  3  8  d
4  4  9  e

List-like `to_replace`

>>> df.replace([0, 1, 2, 3], 4)
   A  B  C
0  4  5  a
1  4  6  b
2  4  7  c
3  4  8  d
4  4  9  e
>>> df.replace([0, 1, 2, 3], [4, 3, 2, 1])
   A  B  C
0  4  5  a
1  3  6  b
2  2  7  c
3  1  8  d
4  4  9  e
>>> s.replace([1, 2], method='bfill')
0    0
1    3
2    3
3    3
4    4
dtype: int64

dict-like `to_replace`

>>> df.replace({0: 10, 1: 100})
     A  B  C
0   10  5  a
1  100  6  b
2    2  7  c
3    3  8  d
4    4  9  e
>>> df.replace({'A': 0, 'B': 5}, 100)
     A    B  C
0  100  100  a
1    1    6  b
2    2    7  c
3    3    8  d
4    4    9  e
>>> df.replace({'A': {0: 100, 4: 400}})
     A  B  C
0  100  5  a
1    1  6  b
2    2  7  c
3    3  8  d
4  400  9  e

Regular expression `to_replace`

>>> df = pd.DataFrame({'A': ['bat', 'foo', 'bait'],
...                    'B': ['abc', 'bar', 'xyz']})
>>> df.replace(to_replace=r'^ba.$', value='new', regex=True)
      A    B
0   new  abc
1   foo  new
2  bait  xyz
>>> df.replace({'A': r'^ba.$'}, {'A': 'new'}, regex=True)
      A    B
0   new  abc
1   foo  bar
2  bait  xyz
>>> df.replace(regex=r'^ba.$', value='new')
      A    B
0   new  abc
1   foo  new
2  bait  xyz
>>> df.replace(regex={r'^ba.$': 'new', 'foo': 'xyz'})
      A    B
0   new  abc
1   xyz  new
2  bait  xyz
>>> df.replace(regex=[r'^ba.$', 'foo'], value='new')
      A    B
0   new  abc
1   new  new
2  bait  xyz

Compare the behavior of s.replace({'a': None}) and s.replace('a', None) to understand the peculiarities of the to_replace parameter:

>>> s = pd.Series([10, 'a', 'a', 'b', 'a'])

When one uses a dict as the to_replace value, it is like the value(s) in the dict are equal to the value parameter. s.replace({'a': None}) is equivalent to s.replace(to_replace={'a': None}, value=None, method=None):

>>> s.replace({'a': None})
0      10
1    None
2    None
3       b
4    None
dtype: object

When value=None and to_replace is a scalar, list or tuple, replace uses the method parameter (default ‘pad’) to do the replacement. So this is why the ‘a’ values are being replaced by 10 in rows 1 and 2 and ‘b’ in row 4 in this case. The command s.replace('a', None) is actually equivalent to s.replace(to_replace='a', value=None, method='pad'):

>>> s.replace('a', None)
0    10
1    10
2    10
3     b
4     b
dtype: object
shift(periods=1, freq=None, axis=0, fill_value=<object object>) → pandas.core.frame.DataFrame[source]

Shift index by desired number of periods with an optional time freq.

When freq is not passed, shift the index without realigning the data. If freq is passed (in this case, the index must be date or datetime, or it will raise a NotImplementedError), the index will be increased using the periods and the freq. freq can be inferred when specified as “infer” as long as either freq or inferred_freq attribute is set in the index.

periods : int

Number of periods to shift. Can be positive or negative.

freq : DateOffset, tseries.offsets, timedelta, or str, optional

Offset to use from the tseries module or time rule (e.g. ‘EOM’). If freq is specified then the index values are shifted but the data is not realigned. That is, use freq if you would like to extend the index when shifting and preserve the original data. If freq is specified as “infer” then it will be inferred from the freq or inferred_freq attributes of the index. If neither of those attributes exist, a ValueError is thrown.

axis : {0 or ‘index’, 1 or ‘columns’, None}, default None

Shift direction.

fill_value : object, optional

The scalar value to use for newly introduced missing values. The default depends on the dtype of self. For numeric data, np.nan is used. For datetime, timedelta, or period data, NaT is used. For extension dtypes, self.dtype.na_value is used.

Changed in version 1.1.0.

DataFrame

Copy of input object, shifted.

Index.shift : Shift values of Index.
DatetimeIndex.shift : Shift values of DatetimeIndex.
PeriodIndex.shift : Shift values of PeriodIndex.
tshift : Shift the time index, using the index’s frequency if available.

>>> df = pd.DataFrame({"Col1": [10, 20, 15, 30, 45],
...                    "Col2": [13, 23, 18, 33, 48],
...                    "Col3": [17, 27, 22, 37, 52]},
...                   index=pd.date_range("2020-01-01", "2020-01-05"))
>>> df
            Col1  Col2  Col3
2020-01-01    10    13    17
2020-01-02    20    23    27
2020-01-03    15    18    22
2020-01-04    30    33    37
2020-01-05    45    48    52
>>> df.shift(periods=3)
            Col1  Col2  Col3
2020-01-01   NaN   NaN   NaN
2020-01-02   NaN   NaN   NaN
2020-01-03   NaN   NaN   NaN
2020-01-04  10.0  13.0  17.0
2020-01-05  20.0  23.0  27.0
>>> df.shift(periods=1, axis="columns")
            Col1  Col2  Col3
2020-01-01   NaN    10    13
2020-01-02   NaN    20    23
2020-01-03   NaN    15    18
2020-01-04   NaN    30    33
2020-01-05   NaN    45    48
>>> df.shift(periods=3, fill_value=0)
            Col1  Col2  Col3
2020-01-01     0     0     0
2020-01-02     0     0     0
2020-01-03     0     0     0
2020-01-04    10    13    17
2020-01-05    20    23    27
>>> df.shift(periods=3, freq="D")
            Col1  Col2  Col3
2020-01-04    10    13    17
2020-01-05    20    23    27
2020-01-06    15    18    22
2020-01-07    30    33    37
2020-01-08    45    48    52
>>> df.shift(periods=3, freq="infer")
            Col1  Col2  Col3
2020-01-04    10    13    17
2020-01-05    20    23    27
2020-01-06    15    18    22
2020-01-07    30    33    37
2020-01-08    45    48    52
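The examples above only shift forward; periods can also be negative, which shifts the data backwards against the index:

```python
import pandas as pd

df = pd.DataFrame({"Col1": [10, 20, 15, 30, 45]},
                  index=pd.date_range("2020-01-01", "2020-01-05"))

# A negative period moves each value toward earlier index positions;
# the trailing positions become NaN
out = df.shift(periods=-2)
print(out["Col1"].tolist())  # [15.0, 30.0, 45.0, nan, nan]
```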
set_index(keys, drop=True, append=False, inplace=False, verify_integrity=False)[source]

Set the DataFrame index using existing columns.

Set the DataFrame index (row labels) using one or more existing columns or arrays (of the correct length). The index can replace the existing index or expand on it.

keys : label or array-like or list of labels/arrays

This parameter can be either a single column key, a single array of the same length as the calling DataFrame, or a list containing an arbitrary combination of column keys and arrays. Here, “array” encompasses Series, Index, np.ndarray, and instances of Iterator.

drop : bool, default True

Delete columns to be used as the new index.

append : bool, default False

Whether to append columns to existing index.

inplace : bool, default False

If True, modifies the DataFrame in place (do not create a new object).

verify_integrity : bool, default False

Check the new index for duplicates. Otherwise defer the check until necessary. Setting to False will improve the performance of this method.

DataFrame or None

Changed row labels or None if inplace=True.

DataFrame.reset_index : Opposite of set_index.
DataFrame.reindex : Change to new indices or expand indices.
DataFrame.reindex_like : Change to same indices as other DataFrame.

>>> df = pd.DataFrame({'month': [1, 4, 7, 10],
...                    'year': [2012, 2014, 2013, 2014],
...                    'sale': [55, 40, 84, 31]})
>>> df
   month  year  sale
0      1  2012    55
1      4  2014    40
2      7  2013    84
3     10  2014    31

Set the index to become the ‘month’ column:

>>> df.set_index('month')
       year  sale
month
1      2012    55
4      2014    40
7      2013    84
10     2014    31

Create a MultiIndex using columns ‘year’ and ‘month’:

>>> df.set_index(['year', 'month'])
            sale
year  month
2012  1     55
2014  4     40
2013  7     84
2014  10    31

Create a MultiIndex using an Index and a column:

>>> df.set_index([pd.Index([1, 2, 3, 4]), 'year'])
         month  sale
   year
1  2012  1      55
2  2014  4      40
3  2013  7      84
4  2014  10     31

Create a MultiIndex using two Series:

>>> s = pd.Series([1, 2, 3, 4])
>>> df.set_index([s, s**2])
      month  year  sale
1 1       1  2012    55
2 4       4  2014    40
3 9       7  2013    84
4 16     10  2014    31
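The verify_integrity flag is not exercised above; a sketch using the same frame, which has a duplicated ‘year’ value:

```python
import pandas as pd

df = pd.DataFrame({'month': [1, 4, 7, 10],
                   'year': [2012, 2014, 2013, 2014],
                   'sale': [55, 40, 84, 31]})

# 'year' contains 2014 twice; verify_integrity=True checks for
# duplicates up front and raises instead of silently creating a
# non-unique index
try:
    df.set_index('year', verify_integrity=True)
    duplicate_detected = False
except ValueError:
    duplicate_detected = True
print(duplicate_detected)  # True
```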
reset_index(level: Optional[Union[Hashable, Sequence[Hashable]]] = None, drop: bool = False, inplace: Literal[False] = False, col_level: Hashable = 0, col_fill: Optional[Hashable] = '') → pandas.core.frame.DataFrame[source]
reset_index(level: Optional[Union[Hashable, Sequence[Hashable]]] = None, drop: bool = False, inplace: Literal[True] = False, col_level: Hashable = 0, col_fill: Optional[Hashable] = '') → None

Reset the index, or a level of it.

Reset the index of the DataFrame, and use the default one instead. If the DataFrame has a MultiIndex, this method can remove one or more levels.

level : int, str, tuple, or list, default None

Only remove the given levels from the index. Removes all levels by default.

drop : bool, default False

Do not try to insert index into dataframe columns. This resets the index to the default integer index.

inplace : bool, default False

Modify the DataFrame in place (do not create a new object).

col_level : int or str, default 0

If the columns have multiple levels, determines which level the labels are inserted into. By default it is inserted into the first level.

col_fill : object, default ‘’

If the columns have multiple levels, determines how the other levels are named. If None then the index name is repeated.

DataFrame or None

DataFrame with the new index or None if inplace=True.

DataFrame.set_index : Opposite of reset_index.
DataFrame.reindex : Change to new indices or expand indices.
DataFrame.reindex_like : Change to same indices as other DataFrame.

>>> df = pd.DataFrame([('bird', 389.0),
...                    ('bird', 24.0),
...                    ('mammal', 80.5),
...                    ('mammal', np.nan)],
...                   index=['falcon', 'parrot', 'lion', 'monkey'],
...                   columns=('class', 'max_speed'))
>>> df
         class  max_speed
falcon    bird      389.0
parrot    bird       24.0
lion    mammal       80.5
monkey  mammal        NaN

When we reset the index, the old index is added as a column, and a new sequential index is used:

>>> df.reset_index()
    index   class  max_speed
0  falcon    bird      389.0
1  parrot    bird       24.0
2    lion  mammal       80.5
3  monkey  mammal        NaN

We can use the drop parameter to avoid the old index being added as a column:

>>> df.reset_index(drop=True)
    class  max_speed
0    bird      389.0
1    bird       24.0
2  mammal       80.5
3  mammal        NaN

You can also use reset_index with MultiIndex.

>>> index = pd.MultiIndex.from_tuples([('bird', 'falcon'),
...                                    ('bird', 'parrot'),
...                                    ('mammal', 'lion'),
...                                    ('mammal', 'monkey')],
...                                   names=['class', 'name'])
>>> columns = pd.MultiIndex.from_tuples([('speed', 'max'),
...                                      ('species', 'type')])
>>> df = pd.DataFrame([(389.0, 'fly'),
...                    ( 24.0, 'fly'),
...                    ( 80.5, 'run'),
...                    (np.nan, 'jump')],
...                   index=index,
...                   columns=columns)
>>> df
               speed species
                 max    type
class  name
bird   falcon  389.0     fly
       parrot   24.0     fly
mammal lion     80.5     run
       monkey    NaN    jump

If the index has multiple levels, we can reset a subset of them:

>>> df.reset_index(level='class')
         class  speed species
                  max    type
name
falcon    bird  389.0     fly
parrot    bird   24.0     fly
lion    mammal   80.5     run
monkey  mammal    NaN    jump

If we are not dropping the index, by default, it is placed in the top level. We can place it in another level:

>>> df.reset_index(level='class', col_level=1)
                speed species
         class    max    type
name
falcon    bird  389.0     fly
parrot    bird   24.0     fly
lion    mammal   80.5     run
monkey  mammal    NaN    jump

When the index is inserted under another level, we can specify under which one with the parameter col_fill:

>>> df.reset_index(level='class', col_level=1, col_fill='species')
              species  speed species
                class    max    type
name
falcon           bird  389.0     fly
parrot           bird   24.0     fly
lion           mammal   80.5     run
monkey         mammal    NaN    jump

If we specify a nonexistent level for col_fill, it is created:

>>> df.reset_index(level='class', col_level=1, col_fill='genus')
                genus  speed species
                class    max    type
name
falcon           bird  389.0     fly
parrot           bird   24.0     fly
lion           mammal   80.5     run
monkey         mammal    NaN    jump
isna() → pandas.core.frame.DataFrame[source]

Detect missing values.

Return a boolean same-sized object indicating if the values are NA. NA values, such as None or numpy.NaN, get mapped to True values. Everything else gets mapped to False values. Characters such as empty strings '' or numpy.inf are not considered NA values (unless you set pandas.options.mode.use_inf_as_na = True).

DataFrame

Mask of bool values for each element in DataFrame that indicates whether an element is an NA value.

DataFrame.isnull : Alias of isna. DataFrame.notna : Boolean inverse of isna. DataFrame.dropna : Omit axes labels with missing values. isna : Top-level isna.

Show which entries in a DataFrame are NA.

>>> df = pd.DataFrame(dict(age=[5, 6, np.NaN],
...                    born=[pd.NaT, pd.Timestamp('1939-05-27'),
...                          pd.Timestamp('1940-04-25')],
...                    name=['Alfred', 'Batman', ''],
...                    toy=[None, 'Batmobile', 'Joker']))
>>> df
   age       born    name        toy
0  5.0        NaT  Alfred       None
1  6.0 1939-05-27  Batman  Batmobile
2  NaN 1940-04-25              Joker
>>> df.isna()
     age   born   name    toy
0  False   True  False   True
1  False  False  False  False
2   True  False  False  False

Show which entries in a Series are NA.

>>> ser = pd.Series([5, 6, np.NaN])
>>> ser
0    5.0
1    6.0
2    NaN
dtype: float64
>>> ser.isna()
0    False
1    False
2     True
dtype: bool
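The caveat above about empty strings and infinities can be checked directly; a minimal sketch with illustrative data:

```python
import numpy as np
import pandas as pd

# Empty strings and infinities are NOT treated as NA by default;
# only None and NaN are.
s = pd.Series(['', np.inf, np.nan, None])
print(s.isna().tolist())  # [False, False, True, True]
```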
isnull() → pandas.core.frame.DataFrame[source]

Detect missing values.

Return a boolean same-sized object indicating if the values are NA. NA values, such as None or numpy.NaN, get mapped to True values. Everything else gets mapped to False values. Characters such as empty strings '' or numpy.inf are not considered NA values (unless you set pandas.options.mode.use_inf_as_na = True).

DataFrame

Mask of bool values for each element in DataFrame that indicates whether an element is an NA value.

DataFrame.isnull : Alias of isna. DataFrame.notna : Boolean inverse of isna. DataFrame.dropna : Omit axes labels with missing values. isna : Top-level isna.

Show which entries in a DataFrame are NA.

>>> df = pd.DataFrame(dict(age=[5, 6, np.NaN],
...                    born=[pd.NaT, pd.Timestamp('1939-05-27'),
...                          pd.Timestamp('1940-04-25')],
...                    name=['Alfred', 'Batman', ''],
...                    toy=[None, 'Batmobile', 'Joker']))
>>> df
   age       born    name        toy
0  5.0        NaT  Alfred       None
1  6.0 1939-05-27  Batman  Batmobile
2  NaN 1940-04-25              Joker
>>> df.isna()
     age   born   name    toy
0  False   True  False   True
1  False  False  False  False
2   True  False  False  False

Show which entries in a Series are NA.

>>> ser = pd.Series([5, 6, np.NaN])
>>> ser
0    5.0
1    6.0
2    NaN
dtype: float64
>>> ser.isna()
0    False
1    False
2     True
dtype: bool
notna() → pandas.core.frame.DataFrame[source]

Detect existing (non-missing) values.

Return a boolean same-sized object indicating if the values are not NA. Non-missing values get mapped to True. Characters such as empty strings '' or numpy.inf are not considered NA values (unless you set pandas.options.mode.use_inf_as_na = True). NA values, such as None or numpy.NaN, get mapped to False values.

DataFrame

Mask of bool values for each element in DataFrame that indicates whether an element is not an NA value.

DataFrame.notnull : Alias of notna. DataFrame.isna : Boolean inverse of notna. DataFrame.dropna : Omit axes labels with missing values. notna : Top-level notna.

Show which entries in a DataFrame are not NA.

>>> df = pd.DataFrame(dict(age=[5, 6, np.NaN],
...                    born=[pd.NaT, pd.Timestamp('1939-05-27'),
...                          pd.Timestamp('1940-04-25')],
...                    name=['Alfred', 'Batman', ''],
...                    toy=[None, 'Batmobile', 'Joker']))
>>> df
   age       born    name        toy
0  5.0        NaT  Alfred       None
1  6.0 1939-05-27  Batman  Batmobile
2  NaN 1940-04-25              Joker
>>> df.notna()
     age   born  name    toy
0   True  False  True  False
1   True   True  True   True
2  False   True  True   True

Show which entries in a Series are not NA.

>>> ser = pd.Series([5, 6, np.NaN])
>>> ser
0    5.0
1    6.0
2    NaN
dtype: float64
>>> ser.notna()
0     True
1     True
2    False
dtype: bool
notnull() → pandas.core.frame.DataFrame[source]

Detect existing (non-missing) values.

Return a boolean same-sized object indicating if the values are not NA. Non-missing values get mapped to True. Characters such as empty strings '' or numpy.inf are not considered NA values (unless you set pandas.options.mode.use_inf_as_na = True). NA values, such as None or numpy.NaN, get mapped to False values.

DataFrame

Mask of bool values for each element in DataFrame that indicates whether an element is not an NA value.

DataFrame.notnull : Alias of notna. DataFrame.isna : Boolean inverse of notna. DataFrame.dropna : Omit axes labels with missing values. notna : Top-level notna.

Show which entries in a DataFrame are not NA.

>>> df = pd.DataFrame(dict(age=[5, 6, np.NaN],
...                    born=[pd.NaT, pd.Timestamp('1939-05-27'),
...                          pd.Timestamp('1940-04-25')],
...                    name=['Alfred', 'Batman', ''],
...                    toy=[None, 'Batmobile', 'Joker']))
>>> df
   age       born    name        toy
0  5.0        NaT  Alfred       None
1  6.0 1939-05-27  Batman  Batmobile
2  NaN 1940-04-25              Joker
>>> df.notna()
     age   born  name    toy
0   True  False  True  False
1   True   True  True   True
2  False   True  True   True

Show which entries in a Series are not NA.

>>> ser = pd.Series([5, 6, np.NaN])
>>> ser
0    5.0
1    6.0
2    NaN
dtype: float64
>>> ser.notna()
0     True
1     True
2    False
dtype: bool
dropna(axis=0, how='any', thresh=None, subset=None, inplace=False)[source]

Remove missing values.

See the User Guide for more on which values are considered missing, and how to work with missing data.

axis : {0 or ‘index’, 1 or ‘columns’}, default 0

Determine if rows or columns which contain missing values are removed.

  • 0, or ‘index’ : Drop rows which contain missing values.

  • 1, or ‘columns’ : Drop columns which contain missing values.

Changed in version 1.0.0: Pass tuple or list to drop on multiple axes. Only a single axis is allowed.

how : {‘any’, ‘all’}, default ‘any’

Determine if row or column is removed from DataFrame, when we have at least one NA or all NA.

  • ‘any’ : If any NA values are present, drop that row or column.

  • ‘all’ : If all values are NA, drop that row or column.

thresh : int, optional

Require that many non-NA values.

subset : array-like, optional

Labels along other axis to consider, e.g. if you are dropping rows these would be a list of columns to include.

inplace : bool, default False

If True, do operation inplace and return None.

DataFrame or None

DataFrame with NA entries dropped from it or None if inplace=True.

DataFrame.isna: Indicate missing values. DataFrame.notna : Indicate existing (non-missing) values. DataFrame.fillna : Replace missing values. Series.dropna : Drop missing values. Index.dropna : Drop missing indices.

>>> df = pd.DataFrame({"name": ['Alfred', 'Batman', 'Catwoman'],
...                    "toy": [np.nan, 'Batmobile', 'Bullwhip'],
...                    "born": [pd.NaT, pd.Timestamp("1940-04-25"),
...                             pd.NaT]})
>>> df
       name        toy       born
0    Alfred        NaN        NaT
1    Batman  Batmobile 1940-04-25
2  Catwoman   Bullwhip        NaT

Drop the rows where at least one element is missing.

>>> df.dropna()
     name        toy       born
1  Batman  Batmobile 1940-04-25

Drop the columns where at least one element is missing.

>>> df.dropna(axis='columns')
       name
0    Alfred
1    Batman
2  Catwoman

Drop the rows where all elements are missing.

>>> df.dropna(how='all')
       name        toy       born
0    Alfred        NaN        NaT
1    Batman  Batmobile 1940-04-25
2  Catwoman   Bullwhip        NaT

Keep only the rows with at least 2 non-NA values.

>>> df.dropna(thresh=2)
       name        toy       born
1    Batman  Batmobile 1940-04-25
2  Catwoman   Bullwhip        NaT

Define in which columns to look for missing values.

>>> df.dropna(subset=['name', 'toy'])
       name        toy       born
1    Batman  Batmobile 1940-04-25
2  Catwoman   Bullwhip        NaT

Keep the DataFrame with valid entries in the same variable.

>>> df.dropna(inplace=True)
>>> df
     name        toy       born
1  Batman  Batmobile 1940-04-25
drop_duplicates(subset: Optional[Union[Hashable, Sequence[Hashable]]] = None, keep: Union[str, bool] = 'first', inplace: bool = False, ignore_index: bool = False) → Optional[pandas.core.frame.DataFrame][source]

Return DataFrame with duplicate rows removed.

Considering certain columns is optional. Indexes, including time indexes, are ignored.

subset : column label or sequence of labels, optional

Only consider certain columns for identifying duplicates, by default use all of the columns.

keep : {‘first’, ‘last’, False}, default ‘first’

Determines which duplicates (if any) to keep.

  • first : Drop duplicates except for the first occurrence.

  • last : Drop duplicates except for the last occurrence.

  • False : Drop all duplicates.

inplace : bool, default False

Whether to drop duplicates in place or to return a copy.

ignore_index : bool, default False

If True, the resulting axis will be labeled 0, 1, …, n - 1.

New in version 1.0.0.

DataFrame or None

DataFrame with duplicates removed or None if inplace=True.

DataFrame.value_counts: Count unique combinations of columns.

Consider dataset containing ramen rating.

>>> df = pd.DataFrame({
...     'brand': ['Yum Yum', 'Yum Yum', 'Indomie', 'Indomie', 'Indomie'],
...     'style': ['cup', 'cup', 'cup', 'pack', 'pack'],
...     'rating': [4, 4, 3.5, 15, 5]
... })
>>> df
    brand style  rating
0  Yum Yum   cup     4.0
1  Yum Yum   cup     4.0
2  Indomie   cup     3.5
3  Indomie  pack    15.0
4  Indomie  pack     5.0

By default, it removes duplicate rows based on all columns.

>>> df.drop_duplicates()
    brand style  rating
0  Yum Yum   cup     4.0
2  Indomie   cup     3.5
3  Indomie  pack    15.0
4  Indomie  pack     5.0

To remove duplicates on specific column(s), use subset.

>>> df.drop_duplicates(subset=['brand'])
    brand style  rating
0  Yum Yum   cup     4.0
2  Indomie   cup     3.5

To remove duplicates and keep last occurrences, use keep.

>>> df.drop_duplicates(subset=['brand', 'style'], keep='last')
    brand style  rating
1  Yum Yum   cup     4.0
2  Indomie   cup     3.5
4  Indomie  pack     5.0
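The ignore_index option described above relabels the surviving rows; a minimal sketch reusing the ramen data (without it, the result would keep the original labels):

```python
import pandas as pd

df = pd.DataFrame({
    'brand': ['Yum Yum', 'Yum Yum', 'Indomie', 'Indomie', 'Indomie'],
    'style': ['cup', 'cup', 'cup', 'pack', 'pack'],
    'rating': [4, 4, 3.5, 15, 5]
})
# ignore_index=True relabels the result 0..n-1 instead of [0, 2, 3, 4]
out = df.drop_duplicates(ignore_index=True)
print(out.index.tolist())  # [0, 1, 2, 3]
```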
duplicated(subset: Optional[Union[Hashable, Sequence[Hashable]]] = None, keep: Union[str, bool] = 'first') → pandas.core.series.Series[source]

Return boolean Series denoting duplicate rows.

Considering certain columns is optional.

subset : column label or sequence of labels, optional

Only consider certain columns for identifying duplicates, by default use all of the columns.

keep : {‘first’, ‘last’, False}, default ‘first’

Determines which duplicates (if any) to mark.

  • first : Mark duplicates as True except for the first occurrence.

  • last : Mark duplicates as True except for the last occurrence.

  • False : Mark all duplicates as True.

Series

Boolean Series denoting duplicated rows.

Index.duplicated : Equivalent method on index. Series.duplicated : Equivalent method on Series. Series.drop_duplicates : Remove duplicate values from Series. DataFrame.drop_duplicates : Remove duplicate values from DataFrame.

Consider dataset containing ramen rating.

>>> df = pd.DataFrame({
...     'brand': ['Yum Yum', 'Yum Yum', 'Indomie', 'Indomie', 'Indomie'],
...     'style': ['cup', 'cup', 'cup', 'pack', 'pack'],
...     'rating': [4, 4, 3.5, 15, 5]
... })
>>> df
    brand style  rating
0  Yum Yum   cup     4.0
1  Yum Yum   cup     4.0
2  Indomie   cup     3.5
3  Indomie  pack    15.0
4  Indomie  pack     5.0

By default, for each set of duplicated values, the first occurrence is set on False and all others on True.

>>> df.duplicated()
0    False
1     True
2    False
3    False
4    False
dtype: bool

By using ‘last’, the last occurrence of each set of duplicated values is set on False and all others on True.

>>> df.duplicated(keep='last')
0     True
1    False
2    False
3    False
4    False
dtype: bool

By setting keep on False, all duplicates are True.

>>> df.duplicated(keep=False)
0     True
1     True
2    False
3    False
4    False
dtype: bool

To find duplicates on specific column(s), use subset.

>>> df.duplicated(subset=['brand'])
0    False
1     True
2    False
3     True
4     True
dtype: bool
sort_values(by, axis=0, ascending=True, inplace=False, kind='quicksort', na_position='last', ignore_index=False, key: Optional[Callable[[Series], Union[Series, AnyArrayLike]]] = None)[source]

Sort by the values along either axis.

by : str or list of str

Name or list of names to sort by.

  • if axis is 0 or ‘index’ then by may contain index levels and/or column labels.

  • if axis is 1 or ‘columns’ then by may contain column levels and/or index labels.

axis : {0 or ‘index’, 1 or ‘columns’}, default 0

Axis to be sorted.

ascending : bool or list of bool, default True

Sort ascending vs. descending. Specify a list for multiple sort orders. If this is a list of bools, it must match the length of by.

inplace : bool, default False

If True, perform operation in-place.

kind : {‘quicksort’, ‘mergesort’, ‘heapsort’}, default ‘quicksort’

Choice of sorting algorithm. See also numpy.sort for more information. mergesort is the only stable algorithm. For DataFrames, this option is only applied when sorting on a single column or label.

na_position : {‘first’, ‘last’}, default ‘last’

Puts NaNs at the beginning if ‘first’; ‘last’ puts NaNs at the end.

ignore_index : bool, default False

If True, the resulting axis will be labeled 0, 1, …, n - 1.

New in version 1.0.0.

key : callable, optional

Apply the key function to the values before sorting. This is similar to the key argument in the builtin sorted() function, with the notable difference that this key function should be vectorized. It should expect a Series and return a Series with the same shape as the input. It will be applied to each column in by independently.

New in version 1.1.0.

DataFrame or None

DataFrame with sorted values or None if inplace=True.

DataFrame.sort_index : Sort a DataFrame by the index. Series.sort_values : Similar method for a Series.

>>> df = pd.DataFrame({
...     'col1': ['A', 'A', 'B', np.nan, 'D', 'C'],
...     'col2': [2, 1, 9, 8, 7, 4],
...     'col3': [0, 1, 9, 4, 2, 3],
...     'col4': ['a', 'B', 'c', 'D', 'e', 'F']
... })
>>> df
  col1  col2  col3 col4
0    A     2     0    a
1    A     1     1    B
2    B     9     9    c
3  NaN     8     4    D
4    D     7     2    e
5    C     4     3    F

Sort by col1

>>> df.sort_values(by=['col1'])
  col1  col2  col3 col4
0    A     2     0    a
1    A     1     1    B
2    B     9     9    c
5    C     4     3    F
4    D     7     2    e
3  NaN     8     4    D

Sort by multiple columns

>>> df.sort_values(by=['col1', 'col2'])
  col1  col2  col3 col4
1    A     1     1    B
0    A     2     0    a
2    B     9     9    c
5    C     4     3    F
4    D     7     2    e
3  NaN     8     4    D

Sort Descending

>>> df.sort_values(by='col1', ascending=False)
  col1  col2  col3 col4
4    D     7     2    e
5    C     4     3    F
2    B     9     9    c
0    A     2     0    a
1    A     1     1    B
3  NaN     8     4    D

Putting NAs first

>>> df.sort_values(by='col1', ascending=False, na_position='first')
  col1  col2  col3 col4
3  NaN     8     4    D
4    D     7     2    e
5    C     4     3    F
2    B     9     9    c
0    A     2     0    a
1    A     1     1    B

Sorting with a key function

>>> df.sort_values(by='col4', key=lambda col: col.str.lower())
  col1  col2  col3 col4
0    A     2     0    a
1    A     1     1    B
2    B     9     9    c
3  NaN     8     4    D
4    D     7     2    e
5    C     4     3    F

Natural sort with the key argument, using the natsort package (https://github.com/SethMMorton/natsort).

>>> df = pd.DataFrame({
...    "time": ['0hr', '128hr', '72hr', '48hr', '96hr'],
...    "value": [10, 20, 30, 40, 50]
... })
>>> df
    time  value
0    0hr     10
1  128hr     20
2   72hr     30
3   48hr     40
4   96hr     50
>>> from natsort import index_natsorted
>>> df.sort_values(
...    by="time",
...    key=lambda x: np.argsort(index_natsorted(df["time"]))
... )
    time  value
0    0hr     10
3   48hr     40
2   72hr     30
4   96hr     50
1  128hr     20
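Since mergesort is noted above as the only stable option, a minimal sketch of what stability means in practice (hypothetical data):

```python
import pandas as pd

df = pd.DataFrame({'grp': ['b', 'a', 'b', 'a'], 'val': [1, 2, 3, 4]})
# A stable sort preserves the original relative order of equal keys:
# the 'a' rows keep val order [2, 4] and the 'b' rows keep [1, 3].
out = df.sort_values(by='grp', kind='mergesort')
print(out['val'].tolist())  # [2, 4, 1, 3]
```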
sort_index(axis=0, level=None, ascending: Union[bool, int, Sequence[Union[bool, int]]] = True, inplace: bool = False, kind: str = 'quicksort', na_position: str = 'last', sort_remaining: bool = True, ignore_index: bool = False, key: Optional[Callable[[Index], Union[Index, AnyArrayLike]]] = None)[source]

Sort object by labels (along an axis).

Returns a new DataFrame sorted by label if inplace argument is False, otherwise updates the original DataFrame and returns None.

axis : {0 or ‘index’, 1 or ‘columns’}, default 0

The axis along which to sort. The value 0 identifies the rows, and 1 identifies the columns.

level : int or level name or list of ints or list of level names

If not None, sort on values in specified index level(s).

ascending : bool or list-like of bools, default True

Sort ascending vs. descending. When the index is a MultiIndex the sort direction can be controlled for each level individually.

inplace : bool, default False

If True, perform operation in-place.

kind : {‘quicksort’, ‘mergesort’, ‘heapsort’}, default ‘quicksort’

Choice of sorting algorithm. See also numpy.sort for more information. mergesort is the only stable algorithm. For DataFrames, this option is only applied when sorting on a single column or label.

na_position : {‘first’, ‘last’}, default ‘last’

Puts NaNs at the beginning if ‘first’; ‘last’ puts NaNs at the end. Not implemented for MultiIndex.

sort_remaining : bool, default True

If True and sorting by level and index is multilevel, sort by other levels too (in order) after sorting by specified level.

ignore_index : bool, default False

If True, the resulting axis will be labeled 0, 1, …, n - 1.

New in version 1.0.0.

key : callable, optional

If not None, apply the key function to the index values before sorting. This is similar to the key argument in the builtin sorted() function, with the notable difference that this key function should be vectorized. It should expect an Index and return an Index of the same shape. For MultiIndex inputs, the key is applied per level.

New in version 1.1.0.

New in version 1.1.0.

DataFrame or None

The original DataFrame sorted by the labels or None if inplace=True.

Series.sort_index : Sort Series by the index. DataFrame.sort_values : Sort DataFrame by the value. Series.sort_values : Sort Series by the value.

>>> df = pd.DataFrame([1, 2, 3, 4, 5], index=[100, 29, 234, 1, 150],
...                   columns=['A'])
>>> df.sort_index()
     A
1    4
29   2
100  1
150  5
234  3

By default, it sorts in ascending order, to sort in descending order, use ascending=False

>>> df.sort_index(ascending=False)
     A
234  3
150  5
100  1
29   2
1    4

A key function can be specified which is applied to the index before sorting. For a MultiIndex this is applied to each level separately.

>>> df = pd.DataFrame({"a": [1, 2, 3, 4]}, index=['A', 'b', 'C', 'd'])
>>> df.sort_index(key=lambda x: x.str.lower())
   a
A  1
b  2
C  3
d  4
value_counts(subset: Optional[Sequence[Optional[Hashable]]] = None, normalize: bool = False, sort: bool = True, ascending: bool = False)[source]

Return a Series containing counts of unique rows in the DataFrame.

New in version 1.1.0.

subset : list-like, optional

Columns to use when counting unique combinations.

normalize : bool, default False

Return proportions rather than frequencies.

sort : bool, default True

Sort by frequencies.

ascending : bool, default False

Sort in ascending order.

Series

Series.value_counts: Equivalent method on Series.

The returned Series will have a MultiIndex with one level per input column. By default, rows that contain any NA values are omitted from the result. By default, the resulting Series will be in descending order so that the first element is the most frequently-occurring row.

>>> df = pd.DataFrame({'num_legs': [2, 4, 4, 6],
...                    'num_wings': [2, 0, 0, 0]},
...                   index=['falcon', 'dog', 'cat', 'ant'])
>>> df
        num_legs  num_wings
falcon         2          2
dog            4          0
cat            4          0
ant            6          0
>>> df.value_counts()
num_legs  num_wings
4         0            2
2         2            1
6         0            1
dtype: int64
>>> df.value_counts(sort=False)
num_legs  num_wings
2         2            1
4         0            2
6         0            1
dtype: int64
>>> df.value_counts(ascending=True)
num_legs  num_wings
2         2            1
6         0            1
4         0            2
dtype: int64
>>> df.value_counts(normalize=True)
num_legs  num_wings
4         0            0.50
2         2            0.25
6         0            0.25
dtype: float64
nlargest(n, columns, keep='first') → pandas.core.frame.DataFrame[source]

Return the first n rows ordered by columns in descending order.

Return the first n rows with the largest values in columns, in descending order. The columns that are not specified are returned as well, but not used for ordering.

This method is equivalent to df.sort_values(columns, ascending=False).head(n), but more performant.

n : int

Number of rows to return.

columns : label or list of labels

Column label(s) to order by.

keep : {‘first’, ‘last’, ‘all’}, default ‘first’

Where there are duplicate values:

  • first : prioritize the first occurrence(s)

  • last : prioritize the last occurrence(s)

  • all : do not drop any duplicates, even if it means selecting more than n items.

New in version 0.24.0.

DataFrame

The first n rows ordered by the given columns in descending order.

DataFrame.nsmallest : Return the first n rows ordered by columns in ascending order.

DataFrame.sort_values : Sort DataFrame by the values. DataFrame.head : Return the first n rows without re-ordering.

This function cannot be used with all column types. For example, when specifying columns with object or category dtypes, TypeError is raised.

>>> df = pd.DataFrame({'population': [59000000, 65000000, 434000,
...                                   434000, 434000, 337000, 11300,
...                                   11300, 11300],
...                    'GDP': [1937894, 2583560 , 12011, 4520, 12128,
...                            17036, 182, 38, 311],
...                    'alpha-2': ["IT", "FR", "MT", "MV", "BN",
...                                "IS", "NR", "TV", "AI"]},
...                   index=["Italy", "France", "Malta",
...                          "Maldives", "Brunei", "Iceland",
...                          "Nauru", "Tuvalu", "Anguilla"])
>>> df
          population      GDP alpha-2
Italy       59000000  1937894      IT
France      65000000  2583560      FR
Malta         434000    12011      MT
Maldives      434000     4520      MV
Brunei        434000    12128      BN
Iceland       337000    17036      IS
Nauru          11300      182      NR
Tuvalu         11300       38      TV
Anguilla       11300      311      AI

In the following example, we will use nlargest to select the three rows having the largest values in column “population”.

>>> df.nlargest(3, 'population')
        population      GDP alpha-2
France    65000000  2583560      FR
Italy     59000000  1937894      IT
Malta       434000    12011      MT

When using keep='last', ties are resolved in reverse order:

>>> df.nlargest(3, 'population', keep='last')
        population      GDP alpha-2
France    65000000  2583560      FR
Italy     59000000  1937894      IT
Brunei      434000    12128      BN

When using keep='all', all duplicate items are maintained:

>>> df.nlargest(3, 'population', keep='all')
          population      GDP alpha-2
France      65000000  2583560      FR
Italy       59000000  1937894      IT
Malta         434000    12011      MT
Maldives      434000     4520      MV
Brunei        434000    12128      BN

To order by the largest values in column “population” and then “GDP”, we can specify multiple columns like in the next example.

>>> df.nlargest(3, ['population', 'GDP'])
        population      GDP alpha-2
France    65000000  2583560      FR
Italy     59000000  1937894      IT
Brunei      434000    12128      BN
nsmallest(n, columns, keep='first') → pandas.core.frame.DataFrame[source]

Return the first n rows ordered by columns in ascending order.

Return the first n rows with the smallest values in columns, in ascending order. The columns that are not specified are returned as well, but not used for ordering.

This method is equivalent to df.sort_values(columns, ascending=True).head(n), but more performant.

n : int

Number of items to retrieve.

columns : list or str

Column name or names to order by.

keep : {‘first’, ‘last’, ‘all’}, default ‘first’

Where there are duplicate values:

  • first : take the first occurrence.

  • last : take the last occurrence.

  • all : do not drop any duplicates, even if it means selecting more than n items.

New in version 0.24.0.

DataFrame

DataFrame.nlargest : Return the first n rows ordered by columns in descending order.

DataFrame.sort_values : Sort DataFrame by the values. DataFrame.head : Return the first n rows without re-ordering.

>>> df = pd.DataFrame({'population': [59000000, 65000000, 434000,
...                                   434000, 434000, 337000, 337000,
...                                   11300, 11300],
...                    'GDP': [1937894, 2583560 , 12011, 4520, 12128,
...                            17036, 182, 38, 311],
...                    'alpha-2': ["IT", "FR", "MT", "MV", "BN",
...                                "IS", "NR", "TV", "AI"]},
...                   index=["Italy", "France", "Malta",
...                          "Maldives", "Brunei", "Iceland",
...                          "Nauru", "Tuvalu", "Anguilla"])
>>> df
          population      GDP alpha-2
Italy       59000000  1937894      IT
France      65000000  2583560      FR
Malta         434000    12011      MT
Maldives      434000     4520      MV
Brunei        434000    12128      BN
Iceland       337000    17036      IS
Nauru         337000      182      NR
Tuvalu         11300       38      TV
Anguilla       11300      311      AI

In the following example, we will use nsmallest to select the three rows having the smallest values in column “population”.

>>> df.nsmallest(3, 'population')
          population    GDP alpha-2
Tuvalu         11300     38      TV
Anguilla       11300    311      AI
Iceland       337000  17036      IS

When using keep='last', ties are resolved in reverse order:

>>> df.nsmallest(3, 'population', keep='last')
          population  GDP alpha-2
Anguilla       11300  311      AI
Tuvalu         11300   38      TV
Nauru         337000  182      NR

When using keep='all', all duplicate items are maintained:

>>> df.nsmallest(3, 'population', keep='all')
          population    GDP alpha-2
Tuvalu         11300     38      TV
Anguilla       11300    311      AI
Iceland       337000  17036      IS
Nauru         337000    182      NR

To order by the smallest values in column “population” and then “GDP”, we can specify multiple columns like in the next example.

>>> df.nsmallest(3, ['population', 'GDP'])
          population  GDP alpha-2
Tuvalu         11300   38      TV
Anguilla       11300  311      AI
Nauru         337000  182      NR
swaplevel(i=-2, j=-1, axis=0) → pandas.core.frame.DataFrame[source]

Swap levels i and j in a MultiIndex on a particular axis.

i, j : int or str

Levels of the indices to be swapped. Can pass level name as string.

axis : {0 or ‘index’, 1 or ‘columns’}, default 0

The axis to swap levels on. 0 or ‘index’ for row-wise, 1 or ‘columns’ for column-wise.

DataFrame
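No example accompanies swaplevel here; a minimal sketch with a hypothetical two-level index:

```python
import pandas as pd

# Hypothetical two-level row index
idx = pd.MultiIndex.from_tuples([('bird', 'falcon'), ('mammal', 'lion')],
                                names=['class', 'name'])
df = pd.DataFrame({'max_speed': [389.0, 80.5]}, index=idx)
# Swap the two index levels, referring to them by name
swapped = df.swaplevel('class', 'name')
print(list(swapped.index.names))  # ['name', 'class']
```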

reorder_levels(order, axis=0) → pandas.core.frame.DataFrame[source]

Rearrange index levels using input order. May not drop or duplicate levels.

order : list of int or list of str

List representing new level order. Reference level by number (position) or by key (label).

axis : {0 or ‘index’, 1 or ‘columns’}, default 0

Where to reorder levels.

DataFrame
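Similarly, reorder_levels can be sketched with the same kind of hypothetical two-level index; unlike swaplevel it accepts any permutation of the levels:

```python
import pandas as pd

idx = pd.MultiIndex.from_tuples([('bird', 'falcon'), ('mammal', 'lion')],
                                names=['class', 'name'])
df = pd.DataFrame({'max_speed': [389.0, 80.5]}, index=idx)
# Move 'name' to the outer level (levels may be given by position or label)
out = df.reorder_levels(['name', 'class'])
print(list(out.index.names))  # ['name', 'class']
```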

compare(other: pandas.core.frame.DataFrame, align_axis: Union[str, int] = 1, keep_shape: bool = False, keep_equal: bool = False) → pandas.core.frame.DataFrame[source]

Compare to another DataFrame and show the differences.

New in version 1.1.0.

otherDataFrame

Object to compare with.

align_axis{0 or ‘index’, 1 or ‘columns’}, default 1

Determine which axis to align the comparison on.

  • 0, or ‘index’ : Resulting differences are stacked vertically

    with rows drawn alternately from self and other.

  • 1, or ‘columns’ : Resulting differences are aligned horizontally

    with columns drawn alternately from self and other.

keep_shape : bool, default False

If true, all rows and columns are kept. Otherwise, only the ones with different values are kept.

keep_equal : bool, default False

If true, the result keeps values that are equal. Otherwise, equal values are shown as NaNs.

DataFrame

DataFrame that shows the differences stacked side by side.

The resulting index will be a MultiIndex with ‘self’ and ‘other’ stacked alternately at the inner level.

ValueError

When the two DataFrames don’t have identical labels or shape.

Series.compare : Compare with another Series and show differences.

DataFrame.equals : Test whether two objects contain the same elements.

Matching NaNs will not appear as a difference.

Can only compare identically-labeled (i.e. same shape, identical row and column labels) DataFrames.

>>> df = pd.DataFrame(
...     {
...         "col1": ["a", "a", "b", "b", "a"],
...         "col2": [1.0, 2.0, 3.0, np.nan, 5.0],
...         "col3": [1.0, 2.0, 3.0, 4.0, 5.0]
...     },
...     columns=["col1", "col2", "col3"],
... )
>>> df
  col1  col2  col3
0    a   1.0   1.0
1    a   2.0   2.0
2    b   3.0   3.0
3    b   NaN   4.0
4    a   5.0   5.0
>>> df2 = df.copy()
>>> df2.loc[0, 'col1'] = 'c'
>>> df2.loc[2, 'col3'] = 4.0
>>> df2
  col1  col2  col3
0    c   1.0   1.0
1    a   2.0   2.0
2    b   3.0   4.0
3    b   NaN   4.0
4    a   5.0   5.0

Align the differences on columns

>>> df.compare(df2)
  col1       col3
  self other self other
0    a     c  NaN   NaN
2  NaN   NaN  3.0   4.0

Stack the differences on rows

>>> df.compare(df2, align_axis=0)
        col1  col3
0 self     a   NaN
  other    c   NaN
2 self   NaN   3.0
  other  NaN   4.0

Keep the equal values

>>> df.compare(df2, keep_equal=True)
  col1       col3
  self other self other
0    a     c  1.0   1.0
2    b     b  3.0   4.0

Keep all original rows and columns

>>> df.compare(df2, keep_shape=True)
  col1       col2       col3
  self other self other self other
0    a     c  NaN   NaN  NaN   NaN
1  NaN   NaN  NaN   NaN  NaN   NaN
2  NaN   NaN  NaN   NaN  3.0   4.0
3  NaN   NaN  NaN   NaN  NaN   NaN
4  NaN   NaN  NaN   NaN  NaN   NaN

Keep all original rows and columns and also all original values

>>> df.compare(df2, keep_shape=True, keep_equal=True)
  col1       col2       col3
  self other self other self other
0    a     c  1.0   1.0  1.0   1.0
1    a     a  2.0   2.0  2.0   2.0
2    b     b  3.0   3.0  3.0   4.0
3    b     b  NaN   NaN  4.0   4.0
4    a     a  5.0   5.0  5.0   5.0
combine(other: DataFrame, func, fill_value=None, overwrite=True) → DataFrame

Perform column-wise combine with another DataFrame.

Combines a DataFrame with other DataFrame using func to element-wise combine columns. The row and column indexes of the resulting DataFrame will be the union of the two.

other : DataFrame

The DataFrame to merge column-wise.

func : function

Function that takes two Series as inputs and returns a Series or a scalar. Used to merge the two dataframes column by column.

fill_value : scalar value, default None

The value to fill NaNs with prior to passing any column to the merge func.

overwrite : bool, default True

If True, columns in self that do not exist in other will be overwritten with NaNs.

DataFrame

Combination of the provided DataFrames.

DataFrame.combine_first : Combine two DataFrame objects and default to non-null values in the frame calling the method.

Combine using a simple function that chooses the smaller column.

>>> df1 = pd.DataFrame({'A': [0, 0], 'B': [4, 4]})
>>> df2 = pd.DataFrame({'A': [1, 1], 'B': [3, 3]})
>>> take_smaller = lambda s1, s2: s1 if s1.sum() < s2.sum() else s2
>>> df1.combine(df2, take_smaller)
   A  B
0  0  3
1  0  3

Example using a true element-wise combine function.

>>> df1 = pd.DataFrame({'A': [5, 0], 'B': [2, 4]})
>>> df2 = pd.DataFrame({'A': [1, 1], 'B': [3, 3]})
>>> df1.combine(df2, np.minimum)
   A  B
0  1  2
1  0  3

Using fill_value fills Nones prior to passing the column to the merge function.

>>> df1 = pd.DataFrame({'A': [0, 0], 'B': [None, 4]})
>>> df2 = pd.DataFrame({'A': [1, 1], 'B': [3, 3]})
>>> df1.combine(df2, take_smaller, fill_value=-5)
   A    B
0  0 -5.0
1  0  4.0

However, if the same element in both dataframes is None, that None is preserved.

>>> df1 = pd.DataFrame({'A': [0, 0], 'B': [None, 4]})
>>> df2 = pd.DataFrame({'A': [1, 1], 'B': [None, 3]})
>>> df1.combine(df2, take_smaller, fill_value=-5)
    A    B
0  0 -5.0
1  0  3.0

Example that demonstrates the use of overwrite and the behavior when the axes differ between the dataframes.

>>> df1 = pd.DataFrame({'A': [0, 0], 'B': [4, 4]})
>>> df2 = pd.DataFrame({'B': [3, 3], 'C': [-10, 1], }, index=[1, 2])
>>> df1.combine(df2, take_smaller)
     A    B     C
0  NaN  NaN   NaN
1  NaN  3.0 -10.0
2  NaN  3.0   1.0
>>> df1.combine(df2, take_smaller, overwrite=False)
     A    B     C
0  0.0  NaN   NaN
1  0.0  3.0 -10.0
2  NaN  3.0   1.0

Demonstrating the preference of the passed-in dataframe.

>>> df2 = pd.DataFrame({'B': [3, 3], 'C': [1, 1], }, index=[1, 2])
>>> df2.combine(df1, take_smaller)
   A    B   C
0  0.0  NaN NaN
1  0.0  3.0 NaN
2  NaN  3.0 NaN
>>> df2.combine(df1, take_smaller, overwrite=False)
     A    B   C
0  0.0  NaN NaN
1  0.0  3.0 1.0
2  NaN  3.0 1.0
combine_first(other: DataFrame) → DataFrame

Update null elements with value in the same location in other.

Combine two DataFrame objects by filling null values in one DataFrame with non-null values from other DataFrame. The row and column indexes of the resulting DataFrame will be the union of the two.

other : DataFrame

Provided DataFrame to use to fill null values.

DataFrame

DataFrame.combine : Perform series-wise operation on two DataFrames using a given function.

>>> df1 = pd.DataFrame({'A': [None, 0], 'B': [None, 4]})
>>> df2 = pd.DataFrame({'A': [1, 1], 'B': [3, 3]})
>>> df1.combine_first(df2)
     A    B
0  1.0  3.0
1  0.0  4.0

Null values still persist if the location of that null value does not exist in other.

>>> df1 = pd.DataFrame({'A': [None, 0], 'B': [4, None]})
>>> df2 = pd.DataFrame({'B': [3, 3], 'C': [1, 1]}, index=[1, 2])
>>> df1.combine_first(df2)
     A    B    C
0  NaN  4.0  NaN
1  0.0  3.0  1.0
2  NaN  3.0  1.0
update(other, join='left', overwrite=True, filter_func=None, errors='ignore') → None

Modify in place using non-NA values from another DataFrame.

Aligns on indices. There is no return value.

other : DataFrame, or object coercible into a DataFrame

Should have at least one matching index/column label with the original DataFrame. If a Series is passed, its name attribute must be set, and that will be used as the column name to align with the original DataFrame.

join : {‘left’}, default ‘left’

Only left join is implemented, keeping the index and columns of the original object.

overwrite : bool, default True

How to handle non-NA values for overlapping keys:

  • True: overwrite original DataFrame’s values with values from other.

  • False: only update values that are NA in the original DataFrame.

filter_func : callable(1d-array) -> bool 1d-array, optional

Can choose to replace values other than NA. Return True for values that should be updated.

errors : {‘raise’, ‘ignore’}, default ‘ignore’

If ‘raise’, will raise a ValueError if the DataFrame and other both contain non-NA data in the same place.

Changed in version 0.24.0: Changed from raise_conflict=False|True to errors=’ignore’|’raise’.

None : method directly changes calling object

ValueError
  • When errors=’raise’ and there’s overlapping non-NA data.

  • When errors is not either ‘ignore’ or ‘raise’

NotImplementedError
  • If join != ‘left’

dict.update : Similar method for dictionaries.

DataFrame.merge : For column(s)-on-column(s) operations.

>>> df = pd.DataFrame({'A': [1, 2, 3],
...                    'B': [400, 500, 600]})
>>> new_df = pd.DataFrame({'B': [4, 5, 6],
...                        'C': [7, 8, 9]})
>>> df.update(new_df)
>>> df
   A  B
0  1  4
1  2  5
2  3  6

The DataFrame’s length does not increase as a result of the update, only values at matching index/column labels are updated.

>>> df = pd.DataFrame({'A': ['a', 'b', 'c'],
...                    'B': ['x', 'y', 'z']})
>>> new_df = pd.DataFrame({'B': ['d', 'e', 'f', 'g', 'h', 'i']})
>>> df.update(new_df)
>>> df
   A  B
0  a  d
1  b  e
2  c  f

For Series, its name attribute must be set.

>>> df = pd.DataFrame({'A': ['a', 'b', 'c'],
...                    'B': ['x', 'y', 'z']})
>>> new_column = pd.Series(['d', 'e'], name='B', index=[0, 2])
>>> df.update(new_column)
>>> df
   A  B
0  a  d
1  b  y
2  c  e
>>> df = pd.DataFrame({'A': ['a', 'b', 'c'],
...                    'B': ['x', 'y', 'z']})
>>> new_df = pd.DataFrame({'B': ['d', 'e']}, index=[1, 2])
>>> df.update(new_df)
>>> df
   A  B
0  a  x
1  b  d
2  c  e

If other contains NaNs, the corresponding values are not updated in the original dataframe.

>>> df = pd.DataFrame({'A': [1, 2, 3],
...                    'B': [400, 500, 600]})
>>> new_df = pd.DataFrame({'B': [4, np.nan, 6]})
>>> df.update(new_df)
>>> df
   A      B
0  1    4.0
1  2  500.0
2  3    6.0
groupby(by=None, axis=0, level=None, as_index: bool = True, sort: bool = True, group_keys: bool = True, squeeze: bool = no_default, observed: bool = False, dropna: bool = True) → DataFrameGroupBy

Group DataFrame using a mapper or by a Series of columns.

A groupby operation involves some combination of splitting the object, applying a function, and combining the results. This can be used to group large amounts of data and compute operations on these groups.

by : mapping, function, label, or list of labels

Used to determine the groups for the groupby. If by is a function, it’s called on each value of the object’s index. If a dict or Series is passed, the Series or dict VALUES will be used to determine the groups (the Series’ values are first aligned; see .align() method). If an ndarray is passed, the values are used as-is to determine the groups. A label or list of labels may be passed to group by the columns in self. Notice that a tuple is interpreted as a (single) key.

axis : {0 or ‘index’, 1 or ‘columns’}, default 0

Split along rows (0) or columns (1).

level : int, level name, or sequence of such, default None

If the axis is a MultiIndex (hierarchical), group by a particular level or levels.

as_index : bool, default True

For aggregated output, return object with group labels as the index. Only relevant for DataFrame input. as_index=False is effectively “SQL-style” grouped output.

sort : bool, default True

Sort group keys. Get better performance by turning this off. Note this does not influence the order of observations within each group. Groupby preserves the order of rows within each group.

group_keys : bool, default True

When calling apply, add group keys to index to identify pieces.

squeeze : bool, default False

Reduce the dimensionality of the return type if possible, otherwise return a consistent type.

Deprecated since version 1.1.0.

observed : bool, default False

This only applies if any of the groupers are Categoricals. If True: only show observed values for categorical groupers. If False: show all values for categorical groupers.

dropna : bool, default True

If True, and if group keys contain NA values, NA values together with the row/column will be dropped. If False, NA values will also be treated as keys in groups.

New in version 1.1.0.

DataFrameGroupBy

Returns a groupby object that contains information about the groups.

resample : Convenience method for frequency conversion and resampling of time series.

See the user guide for more.

>>> df = pd.DataFrame({'Animal': ['Falcon', 'Falcon',
...                               'Parrot', 'Parrot'],
...                    'Max Speed': [380., 370., 24., 26.]})
>>> df
   Animal  Max Speed
0  Falcon      380.0
1  Falcon      370.0
2  Parrot       24.0
3  Parrot       26.0
>>> df.groupby(['Animal']).mean()
        Max Speed
Animal
Falcon      375.0
Parrot       25.0

Hierarchical Indexes

We can groupby different levels of a hierarchical index using the level parameter:

>>> arrays = [['Falcon', 'Falcon', 'Parrot', 'Parrot'],
...           ['Captive', 'Wild', 'Captive', 'Wild']]
>>> index = pd.MultiIndex.from_arrays(arrays, names=('Animal', 'Type'))
>>> df = pd.DataFrame({'Max Speed': [390., 350., 30., 20.]},
...                   index=index)
>>> df
                Max Speed
Animal Type
Falcon Captive      390.0
       Wild         350.0
Parrot Captive       30.0
       Wild          20.0
>>> df.groupby(level=0).mean()
        Max Speed
Animal
Falcon      370.0
Parrot       25.0
>>> df.groupby(level="Type").mean()
         Max Speed
Type
Captive      210.0
Wild         185.0

We can also choose whether to include NA in the group keys by setting the dropna parameter; the default is True:

>>> l = [[1, 2, 3], [1, None, 4], [2, 1, 3], [1, 2, 2]]
>>> df = pd.DataFrame(l, columns=["a", "b", "c"])
>>> df.groupby(by=["b"]).sum()
    a   c
b
1.0 2   3
2.0 2   5
>>> df.groupby(by=["b"], dropna=False).sum()
    a   c
b
1.0 2   3
2.0 2   5
NaN 1   4
>>> l = [["a", 12, 12], [None, 12.3, 33.], ["b", 12.3, 123], ["a", 1, 1]]
>>> df = pd.DataFrame(l, columns=["a", "b", "c"])
>>> df.groupby(by="a").sum()
    b     c
a
a   13.0   13.0
b   12.3  123.0
>>> df.groupby(by="a", dropna=False).sum()
    b     c
a
a   13.0   13.0
b   12.3  123.0
NaN 12.3   33.0
pivot(index=None, columns=None, values=None) → DataFrame

Return reshaped DataFrame organized by given index / column values.

Reshape data (produce a “pivot” table) based on column values. Uses unique values from the specified index / columns to form axes of the resulting DataFrame. This function does not support data aggregation; multiple values will result in a MultiIndex in the columns. See the User Guide for more on reshaping.

index : str or object or a list of str, optional

Column to use to make new frame’s index. If None, uses existing index.

Changed in version 1.1.0: Also accept list of index names.

columns : str or object or a list of str

Column to use to make new frame’s columns.

Changed in version 1.1.0: Also accept list of columns names.

values : str, object or a list of the previous, optional

Column(s) to use for populating new frame’s values. If not specified, all remaining columns will be used and the result will have hierarchically indexed columns.

DataFrame

Returns reshaped DataFrame.

ValueError

When there are any index, columns combinations with multiple values. Use DataFrame.pivot_table when you need to aggregate.

DataFrame.pivot_table : Generalization of pivot that can handle duplicate values for one index/column pair.

DataFrame.unstack : Pivot based on the index values instead of a column.

wide_to_long : Wide panel to long format. Less flexible but more user-friendly than melt.

For finer-tuned control, see hierarchical indexing documentation along with the related stack/unstack methods.

>>> df = pd.DataFrame({'foo': ['one', 'one', 'one', 'two', 'two',
...                            'two'],
...                    'bar': ['A', 'B', 'C', 'A', 'B', 'C'],
...                    'baz': [1, 2, 3, 4, 5, 6],
...                    'zoo': ['x', 'y', 'z', 'q', 'w', 't']})
>>> df
    foo   bar  baz  zoo
0   one   A    1    x
1   one   B    2    y
2   one   C    3    z
3   two   A    4    q
4   two   B    5    w
5   two   C    6    t
>>> df.pivot(index='foo', columns='bar', values='baz')
bar  A   B   C
foo
one  1   2   3
two  4   5   6
>>> df.pivot(index='foo', columns='bar')['baz']
bar  A   B   C
foo
one  1   2   3
two  4   5   6
>>> df.pivot(index='foo', columns='bar', values=['baz', 'zoo'])
      baz       zoo
bar   A  B  C   A  B  C
foo
one   1  2  3   x  y  z
two   4  5  6   q  w  t

You could also assign a list of column names or a list of index names.

>>> df = pd.DataFrame({
...        "lev1": [1, 1, 1, 2, 2, 2],
...        "lev2": [1, 1, 2, 1, 1, 2],
...        "lev3": [1, 2, 1, 2, 1, 2],
...        "lev4": [1, 2, 3, 4, 5, 6],
...        "values": [0, 1, 2, 3, 4, 5]})
>>> df
    lev1 lev2 lev3 lev4 values
0   1    1    1    1    0
1   1    1    2    2    1
2   1    2    1    3    2
3   2    1    2    4    3
4   2    1    1    5    4
5   2    2    2    6    5
>>> df.pivot(index="lev1", columns=["lev2", "lev3"], values="values")
lev2    1         2
lev3    1    2    1    2
lev1
1     0.0  1.0  2.0  NaN
2     4.0  3.0  NaN  5.0
>>> df.pivot(index=["lev1", "lev2"], columns=["lev3"], values="values")
      lev3    1    2
lev1  lev2
   1     1  0.0  1.0
         2  2.0  NaN
   2     1  4.0  3.0
         2  NaN  5.0

A ValueError is raised if there are any duplicates.

>>> df = pd.DataFrame({"foo": ['one', 'one', 'two', 'two'],
...                    "bar": ['A', 'A', 'B', 'C'],
...                    "baz": [1, 2, 3, 4]})
>>> df
   foo bar  baz
0  one   A    1
1  one   A    2
2  two   B    3
3  two   C    4

Notice that the first two rows are the same for our index and columns arguments.

>>> df.pivot(index='foo', columns='bar', values='baz')
Traceback (most recent call last):
   ...
ValueError: Index contains duplicate entries, cannot reshape
pivot_table(values=None, index=None, columns=None, aggfunc='mean', fill_value=None, margins=False, dropna=True, margins_name='All', observed=False) → DataFrame

Create a spreadsheet-style pivot table as a DataFrame.

The levels in the pivot table will be stored in MultiIndex objects (hierarchical indexes) on the index and columns of the result DataFrame.

values : column to aggregate, optional

index : column, Grouper, array, or list of the previous

If an array is passed, it must be the same length as the data. The list can contain any of the other types (except list). Keys to group by on the pivot table index. If an array is passed, it is used in the same manner as column values.

columns : column, Grouper, array, or list of the previous

If an array is passed, it must be the same length as the data. The list can contain any of the other types (except list). Keys to group by on the pivot table column. If an array is passed, it is used in the same manner as column values.

aggfunc : function, list of functions, dict, default numpy.mean

If list of functions passed, the resulting pivot table will have hierarchical columns whose top level are the function names (inferred from the function objects themselves) If dict is passed, the key is column to aggregate and value is function or list of functions.

fill_value : scalar, default None

Value to replace missing values with (in the resulting pivot table, after aggregation).

margins : bool, default False

Add all row / columns (e.g. for subtotal / grand totals).

dropna : bool, default True

Do not include columns whose entries are all NaN.

margins_name : str, default ‘All’

Name of the row / column that will contain the totals when margins is True.

observed : bool, default False

This only applies if any of the groupers are Categoricals. If True: only show observed values for categorical groupers. If False: show all values for categorical groupers.

Changed in version 0.25.0.

DataFrame

An Excel style pivot table.

DataFrame.pivot : Pivot without aggregation that can handle non-numeric data.

DataFrame.melt : Unpivot a DataFrame from wide to long format, optionally leaving identifiers set.

wide_to_long : Wide panel to long format. Less flexible but more user-friendly than melt.

>>> df = pd.DataFrame({"A": ["foo", "foo", "foo", "foo", "foo",
...                          "bar", "bar", "bar", "bar"],
...                    "B": ["one", "one", "one", "two", "two",
...                          "one", "one", "two", "two"],
...                    "C": ["small", "large", "large", "small",
...                          "small", "large", "small", "small",
...                          "large"],
...                    "D": [1, 2, 2, 3, 3, 4, 5, 6, 7],
...                    "E": [2, 4, 5, 5, 6, 6, 8, 9, 9]})
>>> df
     A    B      C  D  E
0  foo  one  small  1  2
1  foo  one  large  2  4
2  foo  one  large  2  5
3  foo  two  small  3  5
4  foo  two  small  3  6
5  bar  one  large  4  6
6  bar  one  small  5  8
7  bar  two  small  6  9
8  bar  two  large  7  9

This first example aggregates values by taking the sum.

>>> table = pd.pivot_table(df, values='D', index=['A', 'B'],
...                     columns=['C'], aggfunc=np.sum)
>>> table
C        large  small
A   B
bar one    4.0    5.0
    two    7.0    6.0
foo one    4.0    1.0
    two    NaN    6.0

We can also fill missing values using the fill_value parameter.

>>> table = pd.pivot_table(df, values='D', index=['A', 'B'],
...                     columns=['C'], aggfunc=np.sum, fill_value=0)
>>> table
C        large  small
A   B
bar one      4      5
    two      7      6
foo one      4      1
    two      0      6

The next example aggregates by taking the mean across multiple columns.

>>> table = pd.pivot_table(df, values=['D', 'E'], index=['A', 'C'],
...                     aggfunc={'D': np.mean,
...                              'E': np.mean})
>>> table
                D         E
A   C
bar large  5.500000  7.500000
    small  5.500000  8.500000
foo large  2.000000  4.500000
    small  2.333333  4.333333

We can also calculate multiple types of aggregations for any given value column.

>>> table = pd.pivot_table(df, values=['D', 'E'], index=['A', 'C'],
...                     aggfunc={'D': np.mean,
...                              'E': [min, max, np.mean]})
>>> table
                D    E
            mean  max      mean  min
A   C
bar large  5.500000  9.0  7.500000  6.0
    small  5.500000  9.0  8.500000  8.0
foo large  2.000000  5.0  4.500000  4.0
    small  2.333333  6.0  4.333333  2.0
stack(level=-1, dropna=True)

Stack the prescribed level(s) from columns to index.

Return a reshaped DataFrame or Series having a multi-level index with one or more new inner-most levels compared to the current DataFrame. The new inner-most levels are created by pivoting the columns of the current dataframe:

  • if the columns have a single level, the output is a Series;

  • if the columns have multiple levels, the new index level(s) is (are) taken from the prescribed level(s) and the output is a DataFrame.

level : int, str, list, default -1

Level(s) to stack from the column axis onto the index axis, defined as one index or label, or a list of indices or labels.

dropna : bool, default True

Whether to drop rows in the resulting Frame/Series with missing values. Stacking a column level onto the index axis can create combinations of index and column values that are missing from the original dataframe. See Examples section.

DataFrame or Series

Stacked dataframe or series.

DataFrame.unstack : Unstack prescribed level(s) from index axis onto column axis.

DataFrame.pivot : Reshape dataframe from long format to wide format.

DataFrame.pivot_table : Create a spreadsheet-style pivot table as a DataFrame.

The function is named by analogy with a collection of books being reorganized from being side by side on a horizontal position (the columns of the dataframe) to being stacked vertically on top of each other (in the index of the dataframe).

Single level columns

>>> df_single_level_cols = pd.DataFrame([[0, 1], [2, 3]],
...                                     index=['cat', 'dog'],
...                                     columns=['weight', 'height'])

Stacking a dataframe with a single level column axis returns a Series:

>>> df_single_level_cols
     weight height
cat       0      1
dog       2      3
>>> df_single_level_cols.stack()
cat  weight    0
     height    1
dog  weight    2
     height    3
dtype: int64

Multi level columns: simple case

>>> multicol1 = pd.MultiIndex.from_tuples([('weight', 'kg'),
...                                        ('weight', 'pounds')])
>>> df_multi_level_cols1 = pd.DataFrame([[1, 2], [2, 4]],
...                                     index=['cat', 'dog'],
...                                     columns=multicol1)

Stacking a dataframe with a multi-level column axis:

>>> df_multi_level_cols1
     weight
         kg    pounds
cat       1        2
dog       2        4
>>> df_multi_level_cols1.stack()
            weight
cat kg           1
    pounds       2
dog kg           2
    pounds       4

Missing values

>>> multicol2 = pd.MultiIndex.from_tuples([('weight', 'kg'),
...                                        ('height', 'm')])
>>> df_multi_level_cols2 = pd.DataFrame([[1.0, 2.0], [3.0, 4.0]],
...                                     index=['cat', 'dog'],
...                                     columns=multicol2)

It is common to have missing values when stacking a dataframe with multi-level columns, as the stacked dataframe typically has more values than the original dataframe. Missing values are filled with NaNs:

>>> df_multi_level_cols2
    weight height
        kg      m
cat    1.0    2.0
dog    3.0    4.0
>>> df_multi_level_cols2.stack()
        height  weight
cat kg     NaN     1.0
    m      2.0     NaN
dog kg     NaN     3.0
    m      4.0     NaN

Prescribing the level(s) to be stacked

The first parameter controls which level or levels are stacked:

>>> df_multi_level_cols2.stack(0)
             kg    m
cat height  NaN  2.0
    weight  1.0  NaN
dog height  NaN  4.0
    weight  3.0  NaN
>>> df_multi_level_cols2.stack([0, 1])
cat  height  m     2.0
     weight  kg    1.0
dog  height  m     4.0
     weight  kg    3.0
dtype: float64

Dropping missing values

>>> df_multi_level_cols3 = pd.DataFrame([[None, 1.0], [2.0, 3.0]],
...                                     index=['cat', 'dog'],
...                                     columns=multicol2)

Note that rows where all values are missing are dropped by default but this behaviour can be controlled via the dropna keyword parameter:

>>> df_multi_level_cols3
    weight height
        kg      m
cat    NaN    1.0
dog    2.0    3.0
>>> df_multi_level_cols3.stack(dropna=False)
        height  weight
cat kg     NaN     NaN
    m      1.0     NaN
dog kg     NaN     2.0
    m      3.0     NaN
>>> df_multi_level_cols3.stack(dropna=True)
        height  weight
cat m      1.0     NaN
dog kg     NaN     2.0
    m      3.0     NaN
explode(column: Union[str, Tuple], ignore_index: bool = False) → DataFrame

Transform each element of a list-like to a row, replicating index values.

New in version 0.25.0.

column : str or tuple

Column to explode.

ignore_index : bool, default False

If True, the resulting index will be labeled 0, 1, …, n - 1.

New in version 1.1.0.

DataFrame

Exploded lists to rows of the subset columns; index will be duplicated for these rows.

ValueError

If columns of the frame are not unique.

DataFrame.unstack : Pivot a level of the (necessarily hierarchical) index labels.

DataFrame.melt : Unpivot a DataFrame from wide format to long format.

Series.explode : Explode a DataFrame from list-like columns to long format.

This routine will explode list-likes including lists, tuples, sets, Series, and np.ndarray. The result dtype of the subset rows will be object. Scalars will be returned unchanged, and empty list-likes will result in a np.nan for that row. In addition, the ordering of rows in the output will be non-deterministic when exploding sets.

>>> df = pd.DataFrame({'A': [[1, 2, 3], 'foo', [], [3, 4]], 'B': 1})
>>> df
           A  B
0  [1, 2, 3]  1
1        foo  1
2         []  1
3     [3, 4]  1
>>> df.explode('A')
     A  B
0    1  1
0    2  1
0    3  1
1  foo  1
2  NaN  1
3    3  1
3    4  1
unstack(level=-1, fill_value=None)

Pivot a level of the (necessarily hierarchical) index labels.

Returns a DataFrame having a new level of column labels whose inner-most level consists of the pivoted index labels.

If the index is not a MultiIndex, the output will be a Series (the analogue of stack when the columns are not a MultiIndex).

level : int, str, or list of these, default -1 (last level)

Level(s) of index to unstack, can pass level name.

fill_value : int, str or dict

Replace NaN with this value if the unstack produces missing values.

Series or DataFrame

DataFrame.pivot : Pivot a table based on column values.

DataFrame.stack : Pivot a level of the column labels (inverse operation from unstack).

>>> index = pd.MultiIndex.from_tuples([('one', 'a'), ('one', 'b'),
...                                    ('two', 'a'), ('two', 'b')])
>>> s = pd.Series(np.arange(1.0, 5.0), index=index)
>>> s
one  a   1.0
     b   2.0
two  a   3.0
     b   4.0
dtype: float64
>>> s.unstack(level=-1)
     a   b
one  1.0  2.0
two  3.0  4.0
>>> s.unstack(level=0)
   one  two
a  1.0   3.0
b  2.0   4.0
>>> df = s.unstack(level=0)
>>> df.unstack()
one  a  1.0
     b  2.0
two  a  3.0
     b  4.0
dtype: float64
melt(id_vars=None, value_vars=None, var_name=None, value_name='value', col_level=None, ignore_index=True) → DataFrame

Unpivot a DataFrame from wide to long format, optionally leaving identifiers set.

This function is useful to massage a DataFrame into a format where one or more columns are identifier variables (id_vars), while all other columns, considered measured variables (value_vars), are “unpivoted” to the row axis, leaving just two non-identifier columns, ‘variable’ and ‘value’.

id_vars : tuple, list, or ndarray, optional

Column(s) to use as identifier variables.

value_vars : tuple, list, or ndarray, optional

Column(s) to unpivot. If not specified, uses all columns that are not set as id_vars.

var_name : scalar

Name to use for the ‘variable’ column. If None it uses frame.columns.name or ‘variable’.

value_name : scalar, default ‘value’

Name to use for the ‘value’ column.

col_level : int or str, optional

If columns are a MultiIndex then use this level to melt.

ignore_index : bool, default True

If True, original index is ignored. If False, the original index is retained. Index labels will be repeated as necessary.

New in version 1.1.0.

DataFrame

Unpivoted DataFrame.

melt : Identical method.

pivot_table : Create a spreadsheet-style pivot table as a DataFrame.

DataFrame.pivot : Return reshaped DataFrame organized by given index / column values.

DataFrame.explode : Explode a DataFrame from list-like columns to long format.

>>> df = pd.DataFrame({'A': {0: 'a', 1: 'b', 2: 'c'},
...                    'B': {0: 1, 1: 3, 2: 5},
...                    'C': {0: 2, 1: 4, 2: 6}})
>>> df
   A  B  C
0  a  1  2
1  b  3  4
2  c  5  6
>>> df.melt(id_vars=['A'], value_vars=['B'])
   A variable  value
0  a        B      1
1  b        B      3
2  c        B      5
>>> df.melt(id_vars=['A'], value_vars=['B', 'C'])
   A variable  value
0  a        B      1
1  b        B      3
2  c        B      5
3  a        C      2
4  b        C      4
5  c        C      6

The names of ‘variable’ and ‘value’ columns can be customized:

>>> df.melt(id_vars=['A'], value_vars=['B'],
...         var_name='myVarname', value_name='myValname')
   A myVarname  myValname
0  a         B          1
1  b         B          3
2  c         B          5

Original index values can be kept around:

>>> df.melt(id_vars=['A'], value_vars=['B', 'C'], ignore_index=False)
   A variable  value
0  a        B      1
1  b        B      3
2  c        B      5
0  a        C      2
1  b        C      4
2  c        C      6

If you have multi-index columns:

>>> df.columns = [list('ABC'), list('DEF')]
>>> df
   A  B  C
   D  E  F
0  a  1  2
1  b  3  4
2  c  5  6
>>> df.melt(col_level=0, id_vars=['A'], value_vars=['B'])
   A variable  value
0  a        B      1
1  b        B      3
2  c        B      5
>>> df.melt(id_vars=[('A', 'D')], value_vars=[('B', 'E')])
  (A, D) variable_0 variable_1  value
0      a          B          E      1
1      b          B          E      3
2      c          B          E      5
diff(periods: int = 1, axis: Union[str, int] = 0) → DataFrame

First discrete difference of element.

Calculates the difference of a Dataframe element compared with another element in the Dataframe (default is element in previous row).

periods : int, default 1

Periods to shift for calculating difference, accepts negative values.

axis : {0 or ‘index’, 1 or ‘columns’}, default 0

Take difference over rows (0) or columns (1).

DataFrame

First differences of the DataFrame.

DataFrame.pct_change : Percent change over given number of periods.
DataFrame.shift : Shift index by desired number of periods with an optional time freq.
Series.diff : First discrete difference of object.

For boolean dtypes, this uses operator.xor() rather than operator.sub(). The result is calculated according to the current dtype in the DataFrame; however, the dtype of the result is always float64.

Difference with previous row

>>> df = pd.DataFrame({'a': [1, 2, 3, 4, 5, 6],
...                    'b': [1, 1, 2, 3, 5, 8],
...                    'c': [1, 4, 9, 16, 25, 36]})
>>> df
   a  b   c
0  1  1   1
1  2  1   4
2  3  2   9
3  4  3  16
4  5  5  25
5  6  8  36
>>> df.diff()
     a    b     c
0  NaN  NaN   NaN
1  1.0  0.0   3.0
2  1.0  1.0   5.0
3  1.0  1.0   7.0
4  1.0  2.0   9.0
5  1.0  3.0  11.0

Difference with previous column

>>> df.diff(axis=1)
    a  b   c
0 NaN  0   0
1 NaN -1   3
2 NaN -1   7
3 NaN -1  13
4 NaN  0  20
5 NaN  2  28

Difference with 3rd previous row

>>> df.diff(periods=3)
     a    b     c
0  NaN  NaN   NaN
1  NaN  NaN   NaN
2  NaN  NaN   NaN
3  3.0  2.0  15.0
4  3.0  4.0  21.0
5  3.0  6.0  27.0

Difference with following row

>>> df.diff(periods=-1)
     a    b     c
0 -1.0  0.0  -3.0
1 -1.0 -1.0  -5.0
2 -1.0 -1.0  -7.0
3 -1.0 -2.0  -9.0
4 -1.0 -3.0 -11.0
5  NaN  NaN   NaN

Overflow in input dtype

>>> df = pd.DataFrame({'a': [1, 0]}, dtype=np.uint8)
>>> df.diff()
       a
0    NaN
1  255.0
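The boolean-dtype behavior noted above (XOR instead of subtraction) has no doctest; here is a minimal sketch of what the docstring describes:

```python
import pandas as pd

# For boolean columns, diff() computes element-wise XOR against the shifted values
df = pd.DataFrame({'flag': [True, True, False, True]})
d = df.diff()
# Row 0 is NaN (nothing to compare against); the rest follow XOR semantics:
# True ^ True -> False, False ^ True -> True, True ^ False -> True
print(d)
```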
aggregate(func=None, axis=0, *args, **kwargs)[source]

Aggregate using one or more operations over the specified axis.

func : function, str, list or dict

Function to use for aggregating the data. If a function, must either work when passed a DataFrame or when passed to DataFrame.apply.

Accepted combinations are:

  • function

  • string function name

  • list of functions and/or function names, e.g. [np.sum, 'mean']

  • dict of axis labels -> functions, function names or list of such.

axis : {0 or ‘index’, 1 or ‘columns’}, default 0

If 0 or ‘index’: apply function to each column. If 1 or ‘columns’: apply function to each row.

*args

Positional arguments to pass to func.

**kwargs

Keyword arguments to pass to func.

scalar, Series or DataFrame

The return can be:

  • scalar : when Series.agg is called with single function

  • Series : when DataFrame.agg is called with a single function

  • DataFrame : when DataFrame.agg is called with several functions

Return scalar, Series or DataFrame.

The aggregation operations are always performed over an axis, either the index (default) or the column axis. This behavior is different from numpy aggregation functions (mean, median, prod, sum, std, var), where the default is to compute the aggregation of the flattened array, e.g., numpy.mean(arr_2d) as opposed to numpy.mean(arr_2d, axis=0).

agg is an alias for aggregate. Use the alias.

DataFrame.apply : Perform any type of operations.
DataFrame.transform : Perform transformation type operations.
core.groupby.GroupBy : Perform operations over groups.
core.resample.Resampler : Perform operations over resampled bins.
core.window.Rolling : Perform operations over rolling window.
core.window.Expanding : Perform operations over expanding window.
core.window.ExponentialMovingWindow : Perform operation over exponential weighted window.

A passed user-defined-function will be passed a Series for evaluation.

>>> df = pd.DataFrame([[1, 2, 3],
...                    [4, 5, 6],
...                    [7, 8, 9],
...                    [np.nan, np.nan, np.nan]],
...                   columns=['A', 'B', 'C'])

Aggregate these functions over the rows.

>>> df.agg(['sum', 'min'])
        A     B     C
sum  12.0  15.0  18.0
min   1.0   2.0   3.0

Different aggregations per column.

>>> df.agg({'A' : ['sum', 'min'], 'B' : ['min', 'max']})
        A    B
sum  12.0  NaN
min   1.0  2.0
max   NaN  8.0

Aggregate different functions over the columns and rename the index of the resulting DataFrame.

>>> df.agg(x=('A', max), y=('B', 'min'), z=('C', np.mean))
     A    B    C
x  7.0  NaN  NaN
y  NaN  2.0  NaN
z  NaN  NaN  6.0

Aggregate over the columns.

>>> df.agg("mean", axis="columns")
0    2.0
1    5.0
2    8.0
3    NaN
dtype: float64
agg(func=None, axis=0, *args, **kwargs)

Alias for aggregate; accepts the same arguments and returns the same result. See aggregate above for the full description and examples.
transform(func: Union[Callable, str, List[Union[Callable, str]], Dict[Optional[Hashable], Union[Callable, str, List[Union[Callable, str]]]]], axis: Union[str, int] = 0, *args, **kwargs) → pandas.core.frame.DataFrame[source]

Call func on self producing a DataFrame with transformed values.

Produced DataFrame will have same axis length as self.

func : function, str, list-like or dict-like

Function to use for transforming the data. If a function, must either work when passed a DataFrame or when passed to DataFrame.apply. If func is both list-like and dict-like, dict-like behavior takes precedence.

Accepted combinations are:

  • function

  • string function name

  • list-like of functions and/or function names, e.g. [np.exp, 'sqrt']

  • dict-like of axis labels -> functions, function names or list-like of such.

axis : {0 or ‘index’, 1 or ‘columns’}, default 0

If 0 or ‘index’: apply function to each column. If 1 or ‘columns’: apply function to each row.

*args

Positional arguments to pass to func.

**kwargs

Keyword arguments to pass to func.

DataFrame

A DataFrame that must have the same length as self.

ValueError : If the returned DataFrame has a different length than self.

DataFrame.agg : Only perform aggregating type operations.
DataFrame.apply : Invoke function on a DataFrame.

>>> df = pd.DataFrame({'A': range(3), 'B': range(1, 4)})
>>> df
   A  B
0  0  1
1  1  2
2  2  3
>>> df.transform(lambda x: x + 1)
   A  B
0  1  2
1  2  3
2  3  4

Even though the resulting DataFrame must have the same length as the input DataFrame, it is possible to provide several input functions:

>>> s = pd.Series(range(3))
>>> s
0    0
1    1
2    2
dtype: int64
>>> s.transform([np.sqrt, np.exp])
       sqrt        exp
0  0.000000   1.000000
1  1.000000   2.718282
2  1.414214   7.389056

You can call transform on a GroupBy object:

>>> df = pd.DataFrame({
...     "Date": [
...         "2015-05-08", "2015-05-07", "2015-05-06", "2015-05-05",
...         "2015-05-08", "2015-05-07", "2015-05-06", "2015-05-05"],
...     "Data": [5, 8, 6, 1, 50, 100, 60, 120],
... })
>>> df
         Date  Data
0  2015-05-08     5
1  2015-05-07     8
2  2015-05-06     6
3  2015-05-05     1
4  2015-05-08    50
5  2015-05-07   100
6  2015-05-06    60
7  2015-05-05   120
>>> df.groupby('Date')['Data'].transform('sum')
0     55
1    108
2     66
3    121
4     55
5    108
6     66
7    121
Name: Data, dtype: int64
>>> df = pd.DataFrame({
...     "c": [1, 1, 1, 2, 2, 2, 2],
...     "type": ["m", "n", "o", "m", "m", "n", "n"]
... })
>>> df
   c type
0  1    m
1  1    n
2  1    o
3  2    m
4  2    m
5  2    n
6  2    n
>>> df['size'] = df.groupby('c')['type'].transform(len)
>>> df
   c type size
0  1    m    3
1  1    n    3
2  1    o    3
3  2    m    4
4  2    m    4
5  2    n    4
6  2    n    4
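As a sketch of the ValueError mentioned above: passing an aggregating function to transform changes the axis length, which is rejected (the exact error message may vary by pandas version):

```python
import pandas as pd

df = pd.DataFrame({'A': range(3), 'B': range(1, 4)})

# An aggregation produces one row instead of len(df) rows, so transform refuses it
try:
    df.transform('sum')
    raised = False
except ValueError:
    raised = True
print(raised)
```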
apply(func, axis=0, raw=False, result_type=None, args=(), **kwds)[source]

Apply a function along an axis of the DataFrame.

Objects passed to the function are Series objects whose index is either the DataFrame’s index (axis=0) or the DataFrame’s columns (axis=1). By default (result_type=None), the final return type is inferred from the return type of the applied function. Otherwise, it depends on the result_type argument.

func : function

Function to apply to each column or row.

axis : {0 or ‘index’, 1 or ‘columns’}, default 0

Axis along which the function is applied:

  • 0 or ‘index’: apply function to each column.

  • 1 or ‘columns’: apply function to each row.

raw : bool, default False

Determines if row or column is passed as a Series or ndarray object:

  • False : passes each row or column as a Series to the function.

  • True : the passed function will receive ndarray objects instead. If you are just applying a NumPy reduction function this will achieve much better performance.

result_type : {‘expand’, ‘reduce’, ‘broadcast’, None}, default None

These only act when axis=1 (columns):

  • ‘expand’ : list-like results will be turned into columns.

  • ‘reduce’ : returns a Series if possible rather than expanding list-like results. This is the opposite of ‘expand’.

  • ‘broadcast’ : results will be broadcast to the original shape of the DataFrame, the original index and columns will be retained.

The default behaviour (None) depends on the return value of the applied function: list-like results will be returned as a Series of those. However if the apply function returns a Series these are expanded to columns.

args : tuple

Positional arguments to pass to func in addition to the array/series.

**kwds

Additional keyword arguments to pass as keywords arguments to func.

Series or DataFrame

Result of applying func along the given axis of the DataFrame.

DataFrame.applymap : For elementwise operations.
DataFrame.aggregate : Only perform aggregating type operations.
DataFrame.transform : Only perform transforming type operations.

>>> df = pd.DataFrame([[4, 9]] * 3, columns=['A', 'B'])
>>> df
   A  B
0  4  9
1  4  9
2  4  9

Using a numpy universal function (in this case the same as np.sqrt(df)):

>>> df.apply(np.sqrt)
     A    B
0  2.0  3.0
1  2.0  3.0
2  2.0  3.0

Using a reducing function on either axis

>>> df.apply(np.sum, axis=0)
A    12
B    27
dtype: int64
>>> df.apply(np.sum, axis=1)
0    13
1    13
2    13
dtype: int64

Returning a list-like will result in a Series

>>> df.apply(lambda x: [1, 2], axis=1)
0    [1, 2]
1    [1, 2]
2    [1, 2]
dtype: object

Passing result_type='expand' will expand list-like results to columns of a Dataframe

>>> df.apply(lambda x: [1, 2], axis=1, result_type='expand')
   0  1
0  1  2
1  1  2
2  1  2

Returning a Series inside the function is similar to passing result_type='expand'. The resulting column names will be the Series index.

>>> df.apply(lambda x: pd.Series([1, 2], index=['foo', 'bar']), axis=1)
   foo  bar
0    1    2
1    1    2
2    1    2

Passing result_type='broadcast' will ensure the same shape result, whether list-like or scalar is returned by the function, and broadcast it along the axis. The resulting column names will be the originals.

>>> df.apply(lambda x: [1, 2], axis=1, result_type='broadcast')
   A  B
0  1  2
1  1  2
2  1  2
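The raw=True path described above can be sketched as follows; each column arrives as a bare ndarray rather than a Series, which skips per-call Series construction:

```python
import pandas as pd

df = pd.DataFrame([[4, 9]] * 3, columns=['A', 'B'])

# With raw=True the function receives a numpy.ndarray per column, not a Series
kinds = df.apply(lambda x: type(x).__name__, raw=True)
print(kinds)

# NumPy reductions work directly on the raw arrays
spans = df.apply(lambda x: x.max() - x.min(), raw=True)
print(spans)
```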
applymap(func, na_action: Optional[str] = None) → pandas.core.frame.DataFrame[source]

Apply a function to a Dataframe elementwise.

This method applies a function that accepts and returns a scalar to every element of a DataFrame.

func : callable

Python function, returns a single value from a single value.

na_action : {None, ‘ignore’}, default None

If ‘ignore’, propagate NaN values, without passing them to func.

New in version 1.2.

DataFrame

Transformed DataFrame.

DataFrame.apply : Apply a function along input axis of DataFrame.

>>> df = pd.DataFrame([[1, 2.12], [3.356, 4.567]])
>>> df
       0      1
0  1.000  2.120
1  3.356  4.567
>>> df.applymap(lambda x: len(str(x)))
   0  1
0  3  4
1  5  5

Like Series.map, NA values can be ignored:

>>> df_copy = df.copy()
>>> df_copy.iloc[0, 0] = pd.NA
>>> df_copy.applymap(lambda x: len(str(x)), na_action='ignore')
      0  1
0  <NA>  4
1     5  5

Note that a vectorized version of func often exists, which will be much faster. You could square each number elementwise.

>>> df.applymap(lambda x: x**2)
           0          1
0   1.000000   4.494400
1  11.262736  20.857489

But it’s better to avoid applymap in that case.

>>> df ** 2
           0          1
0   1.000000   4.494400
1  11.262736  20.857489
append(other, ignore_index=False, verify_integrity=False, sort=False) → pandas.core.frame.DataFrame[source]

Append rows of other to the end of caller, returning a new object.

Columns in other that are not in the caller are added as new columns.

other : DataFrame or Series/dict-like object, or list of these

The data to append.

ignore_index : bool, default False

If True, the resulting axis will be labeled 0, 1, …, n - 1.

verify_integrity : bool, default False

If True, raise ValueError on creating index with duplicates.

sort : bool, default False

Sort columns if the columns of self and other are not aligned.

Changed in version 1.0.0: Changed to not sort by default.

DataFrame

concat : General function to concatenate DataFrame or Series objects.

If a list of dict/series is passed and the keys are all contained in the DataFrame’s index, the order of the columns in the resulting DataFrame will be unchanged.

Iteratively appending rows to a DataFrame can be more computationally intensive than a single concatenate. A better solution is to append those rows to a list and then concatenate the list with the original DataFrame all at once.

>>> df = pd.DataFrame([[1, 2], [3, 4]], columns=list('AB'))
>>> df
   A  B
0  1  2
1  3  4
>>> df2 = pd.DataFrame([[5, 6], [7, 8]], columns=list('AB'))
>>> df.append(df2)
   A  B
0  1  2
1  3  4
0  5  6
1  7  8

With ignore_index set to True:

>>> df.append(df2, ignore_index=True)
   A  B
0  1  2
1  3  4
2  5  6
3  7  8

The following, while not recommended methods for generating DataFrames, show two ways to generate a DataFrame from multiple data sources.

Less efficient:

>>> df = pd.DataFrame(columns=['A'])
>>> for i in range(5):
...     df = df.append({'A': i}, ignore_index=True)
>>> df
   A
0  0
1  1
2  2
3  3
4  4

More efficient:

>>> pd.concat([pd.DataFrame([i], columns=['A']) for i in range(5)],
...           ignore_index=True)
   A
0  0
1  1
2  2
3  3
4  4
join(other, on=None, how='left', lsuffix='', rsuffix='', sort=False) → pandas.core.frame.DataFrame[source]

Join columns of another DataFrame.

Join columns with other DataFrame either on index or on a key column. Efficiently join multiple DataFrame objects by index at once by passing a list.

other : DataFrame, Series, or list of DataFrame

Index should be similar to one of the columns in this one. If a Series is passed, its name attribute must be set, and that will be used as the column name in the resulting joined DataFrame.

on : str, list of str, or array-like, optional

Column or index level name(s) in the caller to join on the index in other, otherwise joins index-on-index. If multiple values given, the other DataFrame must have a MultiIndex. Can pass an array as the join key if it is not already contained in the calling DataFrame. Like an Excel VLOOKUP operation.

how : {‘left’, ‘right’, ‘outer’, ‘inner’}, default ‘left’

How to handle the operation of the two objects.

  • left: use calling frame’s index (or column if on is specified)

  • right: use other’s index.

  • outer: form union of calling frame’s index (or column if on is specified) with other’s index, and sort it lexicographically.

  • inner: form intersection of calling frame’s index (or column if on is specified) with other’s index, preserving the order of the calling’s one.

lsuffix : str, default ‘’

Suffix to use from left frame’s overlapping columns.

rsuffix : str, default ‘’

Suffix to use from right frame’s overlapping columns.

sort : bool, default False

Order result DataFrame lexicographically by the join key. If False, the order of the join key depends on the join type (how keyword).

DataFrame

A dataframe containing columns from both the caller and other.

DataFrame.merge : For column(s)-on-column(s) operations.

Parameters on, lsuffix, and rsuffix are not supported when passing a list of DataFrame objects.

Support for specifying index levels as the on parameter was added in version 0.23.0.

>>> df = pd.DataFrame({'key': ['K0', 'K1', 'K2', 'K3', 'K4', 'K5'],
...                    'A': ['A0', 'A1', 'A2', 'A3', 'A4', 'A5']})
>>> df
  key   A
0  K0  A0
1  K1  A1
2  K2  A2
3  K3  A3
4  K4  A4
5  K5  A5
>>> other = pd.DataFrame({'key': ['K0', 'K1', 'K2'],
...                       'B': ['B0', 'B1', 'B2']})
>>> other
  key   B
0  K0  B0
1  K1  B1
2  K2  B2

Join DataFrames using their indexes.

>>> df.join(other, lsuffix='_caller', rsuffix='_other')
  key_caller   A key_other    B
0         K0  A0        K0   B0
1         K1  A1        K1   B1
2         K2  A2        K2   B2
3         K3  A3       NaN  NaN
4         K4  A4       NaN  NaN
5         K5  A5       NaN  NaN

If we want to join using the key columns, we need to set key to be the index in both df and other. The joined DataFrame will have key as its index.

>>> df.set_index('key').join(other.set_index('key'))
      A    B
key
K0   A0   B0
K1   A1   B1
K2   A2   B2
K3   A3  NaN
K4   A4  NaN
K5   A5  NaN

Another option to join using the key columns is to use the on parameter. DataFrame.join always uses other’s index but we can use any column in df. This method preserves the original DataFrame’s index in the result.

>>> df.join(other.set_index('key'), on='key')
  key   A    B
0  K0  A0   B0
1  K1  A1   B1
2  K2  A2   B2
3  K3  A3  NaN
4  K4  A4  NaN
5  K5  A5  NaN
merge(right, how='inner', on=None, left_on=None, right_on=None, left_index=False, right_index=False, sort=False, suffixes=('_x', '_y'), copy=True, indicator=False, validate=None) → pandas.core.frame.DataFrame[source]

Merge DataFrame or named Series objects with a database-style join.

The join is done on columns or indexes. If joining columns on columns, the DataFrame indexes will be ignored. Otherwise if joining indexes on indexes or indexes on a column or columns, the index will be passed on. When performing a cross merge, no column specifications to merge on are allowed.

right : DataFrame or named Series

Object to merge with.

how : {‘left’, ‘right’, ‘outer’, ‘inner’, ‘cross’}, default ‘inner’

Type of merge to be performed.

  • left: use only keys from left frame, similar to a SQL left outer join; preserve key order.

  • right: use only keys from right frame, similar to a SQL right outer join; preserve key order.

  • outer: use union of keys from both frames, similar to a SQL full outer join; sort keys lexicographically.

  • inner: use intersection of keys from both frames, similar to a SQL inner join; preserve the order of the left keys.

  • cross: creates the cartesian product from both frames, preserves the order of the left keys.

    New in version 1.2.0.

on : label or list

Column or index level names to join on. These must be found in both DataFrames. If on is None and not merging on indexes then this defaults to the intersection of the columns in both DataFrames.

left_on : label or list, or array-like

Column or index level names to join on in the left DataFrame. Can also be an array or list of arrays of the length of the left DataFrame. These arrays are treated as if they are columns.

right_on : label or list, or array-like

Column or index level names to join on in the right DataFrame. Can also be an array or list of arrays of the length of the right DataFrame. These arrays are treated as if they are columns.

left_index : bool, default False

Use the index from the left DataFrame as the join key(s). If it is a MultiIndex, the number of keys in the other DataFrame (either the index or a number of columns) must match the number of levels.

right_index : bool, default False

Use the index from the right DataFrame as the join key. Same caveats as left_index.

sort : bool, default False

Sort the join keys lexicographically in the result DataFrame. If False, the order of the join keys depends on the join type (how keyword).

suffixes : list-like, default is (“_x”, “_y”)

A length-2 sequence where each element is optionally a string indicating the suffix to add to overlapping column names in left and right respectively. Pass a value of None instead of a string to indicate that the column name from left or right should be left as-is, with no suffix. At least one of the values must not be None.

copy : bool, default True

If False, avoid copy if possible.

indicator : bool or str, default False

If True, adds a column to the output DataFrame called “_merge” with information on the source of each row. The column can be given a different name by providing a string argument. The column will have a Categorical type with the value of “left_only” for observations whose merge key only appears in the left DataFrame, “right_only” for observations whose merge key only appears in the right DataFrame, and “both” if the observation’s merge key is found in both DataFrames.

validate : str, optional

If specified, checks if merge is of specified type.

  • “one_to_one” or “1:1”: check if merge keys are unique in both left and right datasets.

  • “one_to_many” or “1:m”: check if merge keys are unique in left dataset.

  • “many_to_one” or “m:1”: check if merge keys are unique in right dataset.

  • “many_to_many” or “m:m”: allowed, but does not result in checks.

DataFrame

A DataFrame of the two merged objects.

merge_ordered : Merge with optional filling/interpolation.
merge_asof : Merge on nearest keys.
DataFrame.join : Similar method using indices.

Support for specifying index levels as the on, left_on, and right_on parameters was added in version 0.23.0. Support for merging named Series objects was added in version 0.24.0.

>>> df1 = pd.DataFrame({'lkey': ['foo', 'bar', 'baz', 'foo'],
...                     'value': [1, 2, 3, 5]})
>>> df2 = pd.DataFrame({'rkey': ['foo', 'bar', 'baz', 'foo'],
...                     'value': [5, 6, 7, 8]})
>>> df1
    lkey value
0   foo      1
1   bar      2
2   baz      3
3   foo      5
>>> df2
    rkey value
0   foo      5
1   bar      6
2   baz      7
3   foo      8

Merge df1 and df2 on the lkey and rkey columns. The value columns have the default suffixes, _x and _y, appended.

>>> df1.merge(df2, left_on='lkey', right_on='rkey')
  lkey  value_x rkey  value_y
0  foo        1  foo        5
1  foo        1  foo        8
2  foo        5  foo        5
3  foo        5  foo        8
4  bar        2  bar        6
5  baz        3  baz        7

Merge DataFrames df1 and df2 with specified left and right suffixes appended to any overlapping columns.

>>> df1.merge(df2, left_on='lkey', right_on='rkey',
...           suffixes=('_left', '_right'))
  lkey  value_left rkey  value_right
0  foo           1  foo            5
1  foo           1  foo            8
2  foo           5  foo            5
3  foo           5  foo            8
4  bar           2  bar            6
5  baz           3  baz            7

Merge DataFrames df1 and df2, but raise an exception if the DataFrames have any overlapping columns.

>>> df1.merge(df2, left_on='lkey', right_on='rkey', suffixes=(False, False))
Traceback (most recent call last):
...
ValueError: columns overlap but no suffix specified:
    Index(['value'], dtype='object')
>>> df1 = pd.DataFrame({'a': ['foo', 'bar'], 'b': [1, 2]})
>>> df2 = pd.DataFrame({'a': ['foo', 'baz'], 'c': [3, 4]})
>>> df1
      a  b
0   foo  1
1   bar  2
>>> df2
      a  c
0   foo  3
1   baz  4
>>> df1.merge(df2, how='inner', on='a')
      a  b  c
0   foo  1  3
>>> df1.merge(df2, how='left', on='a')
      a  b  c
0   foo  1  3.0
1   bar  2  NaN
>>> df1 = pd.DataFrame({'left': ['foo', 'bar']})
>>> df2 = pd.DataFrame({'right': [7, 8]})
>>> df1
    left
0   foo
1   bar
>>> df2
    right
0   7
1   8
>>> df1.merge(df2, how='cross')
   left  right
0   foo      7
1   foo      8
2   bar      7
3   bar      8
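The validate option described above has no doctest; here is a minimal sketch. With a duplicate key in the right frame, a ‘1:1’ check fails (pandas raises MergeError, a ValueError subclass), while ‘1:m’ passes because the left keys are unique:

```python
import pandas as pd

left = pd.DataFrame({'k': ['a', 'b'], 'x': [1, 2]})
right = pd.DataFrame({'k': ['a', 'a'], 'y': [3, 4]})  # 'a' duplicated on the right

# one-to-many: only left keys must be unique, so this passes
ok = left.merge(right, on='k', validate='one_to_many')

# one-to-one: fails because 'a' is not unique in the right frame
try:
    left.merge(right, on='k', validate='one_to_one')
    strict_ok = True
except ValueError:  # pandas.errors.MergeError subclasses ValueError
    strict_ok = False
print(len(ok), strict_ok)
```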
round(decimals=0, *args, **kwargs) → pandas.core.frame.DataFrame[source]

Round a DataFrame to a variable number of decimal places.

decimals : int, dict, Series

Number of decimal places to round each column to. If an int is given, round each column to the same number of places. Otherwise dict and Series round to variable numbers of places. Column names should be in the keys if decimals is a dict-like, or in the index if decimals is a Series. Any columns not included in decimals will be left as is. Elements of decimals which are not columns of the input will be ignored.

*args

Additional keywords have no effect but might be accepted for compatibility with numpy.

**kwargs

Additional keywords have no effect but might be accepted for compatibility with numpy.

DataFrame

A DataFrame with the affected columns rounded to the specified number of decimal places.

numpy.around : Round a numpy array to the given number of decimals. Series.round : Round a Series to the given number of decimals.

>>> df = pd.DataFrame([(.21, .32), (.01, .67), (.66, .03), (.21, .18)],
...                   columns=['dogs', 'cats'])
>>> df
    dogs  cats
0  0.21  0.32
1  0.01  0.67
2  0.66  0.03
3  0.21  0.18

By providing an integer each column is rounded to the same number of decimal places

>>> df.round(1)
    dogs  cats
0   0.2   0.3
1   0.0   0.7
2   0.7   0.0
3   0.2   0.2

With a dict, the number of places for specific columns can be specified with the column names as key and the number of decimal places as value

>>> df.round({'dogs': 1, 'cats': 0})
    dogs  cats
0   0.2   0.0
1   0.0   1.0
2   0.7   0.0
3   0.2   0.0

Using a Series, the number of places for specific columns can be specified with the column names as index and the number of decimal places as value

>>> decimals = pd.Series([0, 1], index=['cats', 'dogs'])
>>> df.round(decimals)
    dogs  cats
0   0.2   0.0
1   0.0   1.0
2   0.7   0.0
3   0.2   0.0
corr(method='pearson', min_periods=1) → pandas.core.frame.DataFrame[source]

Compute pairwise correlation of columns, excluding NA/null values.

method : {‘pearson’, ‘kendall’, ‘spearman’} or callable

Method of correlation:

  • pearson : standard correlation coefficient

  • kendall : Kendall Tau correlation coefficient

  • spearman : Spearman rank correlation

  • callable: callable with input two 1d ndarrays and returning a float. Note that the returned matrix from corr will have 1 along the diagonals and will be symmetric regardless of the callable’s behavior.

    New in version 0.24.0.

min_periods : int, optional

Minimum number of observations required per pair of columns to have a valid result. Currently only available for Pearson and Spearman correlation.

DataFrame

Correlation matrix.

DataFrame.corrwith : Compute pairwise correlation with another DataFrame or Series.

Series.corr : Compute the correlation between two Series.

>>> def histogram_intersection(a, b):
...     v = np.minimum(a, b).sum().round(decimals=1)
...     return v
>>> df = pd.DataFrame([(.2, .3), (.0, .6), (.6, .0), (.2, .1)],
...                   columns=['dogs', 'cats'])
>>> df.corr(method=histogram_intersection)
      dogs  cats
dogs   1.0   0.3
cats   0.3   1.0
cov(min_periods: Optional[int] = None, ddof: Optional[int] = 1) → pandas.core.frame.DataFrame[source]

Compute pairwise covariance of columns, excluding NA/null values.

Compute the pairwise covariance among the series of a DataFrame. The returned data frame is the covariance matrix of the columns of the DataFrame.

Both NA and null values are automatically excluded from the calculation. (See the note below about bias from missing values.) A threshold can be set for the minimum number of observations for each value created. Comparisons with observations below this threshold will be returned as NaN.

This method is generally used for the analysis of time series data to understand the relationship between different measures across time.

min_periods : int, optional

Minimum number of observations required per pair of columns to have a valid result.

ddof : int, default 1

Delta degrees of freedom. The divisor used in calculations is N - ddof, where N represents the number of elements.

New in version 1.1.0.

DataFrame

The covariance matrix of the series of the DataFrame.

Series.cov : Compute covariance with another Series.
core.window.ExponentialMovingWindow.cov : Exponential weighted sample covariance.
core.window.Expanding.cov : Expanding sample covariance.
core.window.Rolling.cov : Rolling sample covariance.

Returns the covariance matrix of the DataFrame’s time series. The covariance is normalized by N-ddof.

For DataFrames that have Series that are missing data (assuming that data is missing at random) the returned covariance matrix will be an unbiased estimate of the variance and covariance between the member Series.

However, for many applications this estimate may not be acceptable because the estimate covariance matrix is not guaranteed to be positive semi-definite. This could lead to estimate correlations having absolute values which are greater than one, and/or a non-invertible covariance matrix. See Estimation of covariance matrices for more details.

>>> df = pd.DataFrame([(1, 2), (0, 3), (2, 0), (1, 1)],
...                   columns=['dogs', 'cats'])
>>> df.cov()
          dogs      cats
dogs  0.666667 -1.000000
cats -1.000000  1.666667
>>> np.random.seed(42)
>>> df = pd.DataFrame(np.random.randn(1000, 5),
...                   columns=['a', 'b', 'c', 'd', 'e'])
>>> df.cov()
          a         b         c         d         e
a  0.998438 -0.020161  0.059277 -0.008943  0.014144
b -0.020161  1.059352 -0.008543 -0.024738  0.009826
c  0.059277 -0.008543  1.010670 -0.001486 -0.000271
d -0.008943 -0.024738 -0.001486  0.921297 -0.013692
e  0.014144  0.009826 -0.000271 -0.013692  0.977795

Minimum number of periods

This method also supports an optional min_periods keyword that specifies the required minimum number of non-NA observations for each column pair in order to have a valid result:

>>> np.random.seed(42)
>>> df = pd.DataFrame(np.random.randn(20, 3),
...                   columns=['a', 'b', 'c'])
>>> df.loc[df.index[:5], 'a'] = np.nan
>>> df.loc[df.index[5:10], 'b'] = np.nan
>>> df.cov(min_periods=12)
          a         b         c
a  0.316741       NaN -0.150812
b       NaN  1.248003  0.191417
c -0.150812  0.191417  0.895202
corrwith(other, axis=0, drop=False, method='pearson') → pandas.core.series.Series[source]

Compute pairwise correlation.

Pairwise correlation is computed between rows or columns of DataFrame with rows or columns of Series or DataFrame. DataFrames are first aligned along both axes before computing the correlations.

other : DataFrame, Series

Object with which to compute correlations.

axis : {0 or ‘index’, 1 or ‘columns’}, default 0

The axis to use. 0 or ‘index’ to compute column-wise, 1 or ‘columns’ for row-wise.

drop : bool, default False

Drop missing indices from result.

method : {‘pearson’, ‘kendall’, ‘spearman’} or callable

Method of correlation:

  • pearson : standard correlation coefficient

  • kendall : Kendall Tau correlation coefficient

  • spearman : Spearman rank correlation

  • callable: callable with input two 1d ndarrays

    and returning a float.

New in version 0.24.0.

Series

Pairwise correlations.

DataFrame.corr : Compute pairwise correlation of columns.
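A minimal sketch of corrwith (the column names and data below are illustrative, not from the pandas documentation):

```python
import pandas as pd

# Two DataFrames sharing column labels; corrwith aligns on both axes first.
df1 = pd.DataFrame({'a': [1, 2, 3, 4], 'b': [4, 3, 2, 1]})
df2 = pd.DataFrame({'a': [1, 2, 3, 4], 'b': [1, 2, 3, 4]})

# Column-wise Pearson correlation (axis=0, the default)
result = df1.corrwith(df2)
# 'a' is perfectly correlated (1.0); 'b' is perfectly anti-correlated (-1.0)
```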

count(axis=0, level=None, numeric_only=False)[source]

Count non-NA cells for each column or row.

The values None, NaN, NaT, and optionally numpy.inf (depending on pandas.options.mode.use_inf_as_na) are considered NA.

axis : {0 or ‘index’, 1 or ‘columns’}, default 0

If 0 or ‘index’ counts are generated for each column. If 1 or ‘columns’ counts are generated for each row.

level : int or str, optional

If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing into a DataFrame. A str specifies the level name.

numeric_only : bool, default False

Include only float, int or boolean data.

Series or DataFrame

For each column/row the number of non-NA/null entries. If level is specified returns a DataFrame.

Series.count : Number of non-NA elements in a Series.
DataFrame.value_counts : Count unique combinations of columns.
DataFrame.shape : Number of DataFrame rows and columns (including NA elements).
DataFrame.isna : Boolean same-sized DataFrame showing places of NA elements.

Constructing DataFrame from a dictionary:

>>> df = pd.DataFrame({"Person":
...                    ["John", "Myla", "Lewis", "John", "Myla"],
...                    "Age": [24., np.nan, 21., 33, 26],
...                    "Single": [False, True, True, True, False]})
>>> df
   Person   Age  Single
0    John  24.0   False
1    Myla   NaN    True
2   Lewis  21.0    True
3    John  33.0    True
4    Myla  26.0   False

Notice the uncounted NA values:

>>> df.count()
Person    5
Age       4
Single    5
dtype: int64

Counts for each row:

>>> df.count(axis='columns')
0    3
1    2
2    3
3    3
4    3
dtype: int64

Counts for one level of a MultiIndex:

>>> df.set_index(["Person", "Single"]).count(level="Person")
        Age
Person
John      2
Lewis     1
Myla      1
nunique(axis=0, dropna=True) → pandas.core.series.Series[source]

Count distinct observations over requested axis.

Return Series with number of distinct observations. Can ignore NaN values.

axis : {0 or ‘index’, 1 or ‘columns’}, default 0

The axis to use. 0 or ‘index’ for row-wise, 1 or ‘columns’ for column-wise.

dropna : bool, default True

Don’t include NaN in the counts.

Series

Series.nunique : Method nunique for Series.
DataFrame.count : Count non-NA cells for each column or row.

>>> df = pd.DataFrame({'A': [1, 2, 3], 'B': [1, 1, 1]})
>>> df.nunique()
A    3
B    1
dtype: int64
>>> df.nunique(axis=1)
0    1
1    2
2    2
dtype: int64
idxmin(axis=0, skipna=True) → pandas.core.series.Series[source]

Return index of first occurrence of minimum over requested axis.

NA/null values are excluded.

axis : {0 or ‘index’, 1 or ‘columns’}, default 0

The axis to use. 0 or ‘index’ for row-wise, 1 or ‘columns’ for column-wise.

skipna : bool, default True

Exclude NA/null values. If an entire row/column is NA, the result will be NA.

Series

Indexes of minima along the specified axis.

ValueError
  • If the row/column is empty

Series.idxmin : Return index of the minimum element.

This method is the DataFrame version of ndarray.argmin.

Consider a dataset containing food consumption in Argentina.

>>> df = pd.DataFrame({'consumption': [10.51, 103.11, 55.48],
...                    'co2_emissions': [37.2, 19.66, 1712]},
...                    index=['Pork', 'Wheat Products', 'Beef'])
>>> df
                consumption  co2_emissions
Pork                  10.51         37.20
Wheat Products       103.11         19.66
Beef                  55.48       1712.00

By default, it returns the index for the minimum value in each column.

>>> df.idxmin()
consumption                Pork
co2_emissions    Wheat Products
dtype: object

To return the index for the minimum value in each row, use axis="columns".

>>> df.idxmin(axis="columns")
Pork                consumption
Wheat Products    co2_emissions
Beef                consumption
dtype: object
idxmax(axis=0, skipna=True) → pandas.core.series.Series[source]

Return index of first occurrence of maximum over requested axis.

NA/null values are excluded.

axis : {0 or ‘index’, 1 or ‘columns’}, default 0

The axis to use. 0 or ‘index’ for row-wise, 1 or ‘columns’ for column-wise.

skipna : bool, default True

Exclude NA/null values. If an entire row/column is NA, the result will be NA.

Series

Indexes of maxima along the specified axis.

ValueError
  • If the row/column is empty

Series.idxmax : Return index of the maximum element.

This method is the DataFrame version of ndarray.argmax.

Consider a dataset containing food consumption in Argentina.

>>> df = pd.DataFrame({'consumption': [10.51, 103.11, 55.48],
...                    'co2_emissions': [37.2, 19.66, 1712]},
...                    index=['Pork', 'Wheat Products', 'Beef'])
>>> df
                consumption  co2_emissions
Pork                  10.51         37.20
Wheat Products       103.11         19.66
Beef                  55.48       1712.00

By default, it returns the index for the maximum value in each column.

>>> df.idxmax()
consumption     Wheat Products
co2_emissions             Beef
dtype: object

To return the index for the maximum value in each row, use axis="columns".

>>> df.idxmax(axis="columns")
Pork              co2_emissions
Wheat Products     consumption
Beef              co2_emissions
dtype: object
mode(axis=0, numeric_only=False, dropna=True) → pandas.core.frame.DataFrame[source]

Get the mode(s) of each element along the selected axis.

The mode of a set of values is the value that appears most often. It can be multiple values.

axis : {0 or ‘index’, 1 or ‘columns’}, default 0

The axis to iterate over while searching for the mode:

  • 0 or ‘index’ : get mode of each column

  • 1 or ‘columns’ : get mode of each row.

numeric_only : bool, default False

If True, only apply to numeric columns.

dropna : bool, default True

Don’t consider counts of NaN/NaT.

New in version 0.24.0.

DataFrame

The modes of each column or row.

Series.mode : Return the highest frequency value in a Series.
Series.value_counts : Return the counts of values in a Series.

>>> df = pd.DataFrame([('bird', 2, 2),
...                    ('mammal', 4, np.nan),
...                    ('arthropod', 8, 0),
...                    ('bird', 2, np.nan)],
...                   index=('falcon', 'horse', 'spider', 'ostrich'),
...                   columns=('species', 'legs', 'wings'))
>>> df
           species  legs  wings
falcon        bird     2    2.0
horse       mammal     4    NaN
spider   arthropod     8    0.0
ostrich       bird     2    NaN

By default, missing values are not considered, and the modes of wings are both 0.0 and 2.0. Because the resulting DataFrame has two rows, the second row of species and legs contains NaN.

>>> df.mode()
  species  legs  wings
0    bird   2.0    0.0
1     NaN   NaN    2.0

Setting dropna=False, NaN values are considered and they can be the mode (as for wings).

>>> df.mode(dropna=False)
  species  legs  wings
0    bird     2    NaN

Setting numeric_only=True, only the mode of numeric columns is computed, and columns of other types are ignored.

>>> df.mode(numeric_only=True)
   legs  wings
0   2.0    0.0
1   NaN    2.0

To compute the mode over columns and not rows, use the axis parameter:

>>> df.mode(axis='columns', numeric_only=True)
           0    1
falcon   2.0  NaN
horse    4.0  NaN
spider   0.0  8.0
ostrich  2.0  NaN
quantile(q=0.5, axis=0, numeric_only=True, interpolation='linear')[source]

Return values at the given quantile over requested axis.

q : float or array-like, default 0.5 (50% quantile)

Value between 0 <= q <= 1, the quantile(s) to compute.

axis : {0, 1, ‘index’, ‘columns’}, default 0

Equals 0 or ‘index’ for row-wise, 1 or ‘columns’ for column-wise.

numeric_only : bool, default True

If False, the quantile of datetime and timedelta data will be computed as well.

interpolation : {‘linear’, ‘lower’, ‘higher’, ‘midpoint’, ‘nearest’}

This optional parameter specifies the interpolation method to use, when the desired quantile lies between two data points i and j:

  • linear: i + (j - i) * fraction, where fraction is the fractional part of the index surrounded by i and j.

  • lower: i.

  • higher: j.

  • nearest: i or j whichever is nearest.

  • midpoint: (i + j) / 2.
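For instance, with the illustrative values 1 through 4 below (an assumption, not from the pandas documentation), the 0.4 quantile falls at position (4-1)*0.4 = 1.2, i.e. between i = 2 and j = 3 with fraction 0.2, so the interpolation methods give different results:

```python
import pandas as pd

df = pd.DataFrame({'a': [1, 2, 3, 4]})

linear = df.quantile(0.4)['a']                              # 2 + (3-2)*0.2 = 2.2
lower = df.quantile(0.4, interpolation='lower')['a']        # i = 2
higher = df.quantile(0.4, interpolation='higher')['a']      # j = 3
midpoint = df.quantile(0.4, interpolation='midpoint')['a']  # (2+3)/2 = 2.5
```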

Series or DataFrame

If q is an array, a DataFrame will be returned where the index is q, the columns are the columns of self, and the values are the quantiles.

If q is a float, a Series will be returned where the index is the columns of self and the values are the quantiles.

core.window.Rolling.quantile : Rolling quantile.
numpy.percentile : Numpy function to compute the percentile.

>>> df = pd.DataFrame(np.array([[1, 1], [2, 10], [3, 100], [4, 100]]),
...                   columns=['a', 'b'])
>>> df.quantile(.1)
a    1.3
b    3.7
Name: 0.1, dtype: float64
>>> df.quantile([.1, .5])
       a     b
0.1  1.3   3.7
0.5  2.5  55.0

Specifying numeric_only=False will also compute the quantile of datetime and timedelta data.

>>> df = pd.DataFrame({'A': [1, 2],
...                    'B': [pd.Timestamp('2010'),
...                          pd.Timestamp('2011')],
...                    'C': [pd.Timedelta('1 days'),
...                          pd.Timedelta('2 days')]})
>>> df.quantile(0.5, numeric_only=False)
A                    1.5
B    2010-07-02 12:00:00
C        1 days 12:00:00
Name: 0.5, dtype: object
to_timestamp(freq=None, how: str = 'start', axis: Union[str, int] = 0, copy: bool = True) → pandas.core.frame.DataFrame[source]

Cast to DatetimeIndex of timestamps, at beginning of period.

freq : str, default frequency of PeriodIndex

Desired frequency.

how : {‘s’, ‘e’, ‘start’, ‘end’}

Convention for converting period to timestamp; start of period vs. end.

axis : {0 or ‘index’, 1 or ‘columns’}, default 0

The axis to convert (the index by default).

copy : bool, default True

If False then underlying input data is not copied.

DataFrame with DatetimeIndex
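A brief illustrative sketch (the quarterly index and data below are assumptions):

```python
import pandas as pd

# DataFrame indexed by quarterly periods
pidx = pd.period_range('2021Q1', periods=4, freq='Q')
df = pd.DataFrame({'sales': [10, 20, 30, 40]}, index=pidx)

# Convert the PeriodIndex to a DatetimeIndex at the start of each period
ts = df.to_timestamp(how='start')
# ts.index[0] is the timestamp for the beginning of 2021Q1, i.e. 2021-01-01
```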

to_period(freq=None, axis: Union[str, int] = 0, copy: bool = True) → pandas.core.frame.DataFrame[source]

Convert DataFrame from DatetimeIndex to PeriodIndex.

Convert DataFrame from DatetimeIndex to PeriodIndex with desired frequency (inferred from index if not passed).

freq : str, default

Frequency of the PeriodIndex.

axis : {0 or ‘index’, 1 or ‘columns’}, default 0

The axis to convert (the index by default).

copy : bool, default True

If False then underlying input data is not copied.

DataFrame with PeriodIndex
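A brief illustrative sketch (the daily index below is an assumption):

```python
import pandas as pd

# DataFrame indexed by daily timestamps
didx = pd.date_range('2021-01-01', periods=3, freq='D')
df = pd.DataFrame({'x': [1, 2, 3]}, index=didx)

# The frequency is inferred from the index when freq is not passed
p = df.to_period()
# p.index is now a PeriodIndex with daily ('D') frequency
```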

isin(values) → pandas.core.frame.DataFrame[source]

Whether each element in the DataFrame is contained in values.

values : iterable, Series, DataFrame or dict

The result will only be true at a location if all the labels match. If values is a Series, that’s the index. If values is a dict, the keys must be the column names, which must match. If values is a DataFrame, then both the index and column labels must match.

DataFrame

DataFrame of booleans showing whether each element in the DataFrame is contained in values.

DataFrame.eq : Equality test for DataFrame.
Series.isin : Equivalent method on Series.
Series.str.contains : Test if pattern or regex is contained within a string of a Series or Index.

>>> df = pd.DataFrame({'num_legs': [2, 4], 'num_wings': [2, 0]},
...                   index=['falcon', 'dog'])
>>> df
        num_legs  num_wings
falcon         2          2
dog            4          0

When values is a list, check whether every value in the DataFrame is present in the list (which animals have 0 or 2 legs or wings):

>>> df.isin([0, 2])
        num_legs  num_wings
falcon      True       True
dog        False       True

When values is a dict, we can pass values to check for each column separately:

>>> df.isin({'num_wings': [0, 3]})
        num_legs  num_wings
falcon     False      False
dog        False       True

When values is a Series or DataFrame the index and column must match. Note that ‘falcon’ does not match based on the number of legs in df2.

>>> other = pd.DataFrame({'num_legs': [8, 2], 'num_wings': [0, 2]},
...                      index=['spider', 'falcon'])
>>> df.isin(other)
        num_legs  num_wings
falcon      True       True
dog        False      False
index: Index

The index (row labels) of the DataFrame.

columns: Index

The column labels of the DataFrame.

plot

alias of pandas.plotting._core.PlotAccessor

hist(column: Union[Hashable, None, Sequence[Optional[Hashable]]] = None, by=None, grid: bool = True, xlabelsize: Optional[int] = None, xrot: Optional[float] = None, ylabelsize: Optional[int] = None, yrot: Optional[float] = None, ax=None, sharex: bool = False, sharey: bool = False, figsize: Optional[Tuple[int, int]] = None, layout: Optional[Tuple[int, int]] = None, bins: Union[int, Sequence[int]] = 10, backend: Optional[str] = None, legend: bool = False, **kwargs)

Make a histogram of the DataFrame's columns.

A histogram is a representation of the distribution of data. This function calls matplotlib.pyplot.hist(), on each series in the DataFrame, resulting in one histogram per column.

data : DataFrame

The pandas object holding the data.

column : str or sequence

If passed, will be used to limit data to a subset of columns.

by : object, optional

If passed, then used to form histograms for separate groups.

grid : bool, default True

Whether to show axis grid lines.

xlabelsize : int, default None

If specified changes the x-axis label size.

xrot : float, default None

Rotation of x axis labels. For example, a value of 90 displays the x labels rotated 90 degrees clockwise.

ylabelsize : int, default None

If specified changes the y-axis label size.

yrot : float, default None

Rotation of y axis labels. For example, a value of 90 displays the y labels rotated 90 degrees clockwise.

ax : Matplotlib axes object, default None

The axes to plot the histogram on.

sharex : bool, default True if ax is None else False

In case subplots=True, share x axis and set some x axis labels to invisible; defaults to True if ax is None otherwise False if an ax is passed in. Note that passing in both an ax and sharex=True will alter all x axis labels for all subplots in a figure.

sharey : bool, default False

In case subplots=True, share y axis and set some y axis labels to invisible.

figsize : tuple

The size in inches of the figure to create. Uses the value in matplotlib.rcParams by default.

layout : tuple, optional

Tuple of (rows, columns) for the layout of the histograms.

bins : int or sequence, default 10

Number of histogram bins to be used. If an integer is given, bins + 1 bin edges are calculated and returned. If bins is a sequence, gives bin edges, including left edge of first bin and right edge of last bin. In this case, bins is returned unmodified.

backend : str, default None

Backend to use instead of the backend specified in the option plotting.backend. For instance, ‘matplotlib’. Alternatively, to specify the plotting.backend for the whole session, set pd.options.plotting.backend.

New in version 1.0.0.

legend : bool, default False

Whether to show the legend.

New in version 1.1.0.

**kwargs

All other plotting keyword arguments to be passed to matplotlib.pyplot.hist().

matplotlib.AxesSubplot or numpy.ndarray of them

matplotlib.pyplot.hist : Plot a histogram using matplotlib.

This example draws a histogram based on the length and width of some animals, displayed in three bins
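The code for that example did not survive in this text; a reconstruction along the lines of the pandas documentation (the animal data is illustrative):

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend so the example runs headless
import pandas as pd

df = pd.DataFrame({
    'length': [1.5, 0.5, 1.2, 0.9, 3.0],
    'width': [0.7, 0.2, 0.15, 0.2, 1.1],
}, index=['pig', 'rabbit', 'duck', 'chicken', 'horse'])

# One histogram per column, each with three bins
axes = df.hist(bins=3)
```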

boxplot(column=None, by=None, ax=None, fontsize=None, rot=0, grid=True, figsize=None, layout=None, return_type=None, backend=None, **kwargs)

Make a box plot from DataFrame columns.

Make a box-and-whisker plot from DataFrame columns, optionally grouped by some other columns. A box plot is a method for graphically depicting groups of numerical data through their quartiles. The box extends from the Q1 to Q3 quartile values of the data, with a line at the median (Q2). The whiskers extend from the edges of the box to show the range of the data. By default, they extend no more than 1.5 * IQR (IQR = Q3 - Q1) from the edges of the box, ending at the farthest data point within that interval. Outliers are plotted as separate dots.

For further details see Wikipedia’s entry for boxplot.

column : str or list of str, optional

Column name or list of names, or vector. Can be any valid input to pandas.DataFrame.groupby().

by : str or array-like, optional

Column in the DataFrame to pandas.DataFrame.groupby(). One box-plot will be done per value of columns in by.

ax : object of class matplotlib.axes.Axes, optional

The matplotlib axes to be used by boxplot.

fontsize : float or str

Tick label font size in points or as a string (e.g., large).

rot : int or float, default 0

The rotation angle of labels (in degrees) with respect to the screen coordinate system.

grid : bool, default True

Setting this to True will show the grid.

figsize : tuple (width, height) in inches

The size of the figure to create in matplotlib.

layout : tuple (rows, columns), optional

For example, (3, 5) will display the subplots using 3 rows and 5 columns, starting from the top-left.

return_type : {‘axes’, ‘dict’, ‘both’} or None, default ‘axes’

The kind of object to return. The default is axes.

  • ‘axes’ returns the matplotlib axes the boxplot is drawn on.

  • ‘dict’ returns a dictionary whose values are the matplotlib Lines of the boxplot.

  • ‘both’ returns a namedtuple with the axes and dict.

  • when grouping with by, a Series mapping columns to return_type is returned.

    If return_type is None, a NumPy array of axes with the same shape as layout is returned.

backend : str, default None

Backend to use instead of the backend specified in the option plotting.backend. For instance, ‘matplotlib’. Alternatively, to specify the plotting.backend for the whole session, set pd.options.plotting.backend.

New in version 1.0.0.

**kwargs

All other plotting keyword arguments to be passed to matplotlib.pyplot.boxplot().

result

See Notes.

Series.plot.hist : Make a histogram.
matplotlib.pyplot.boxplot : Matplotlib equivalent plot.

The return type depends on the return_type parameter:

  • ‘axes’ : object of class matplotlib.axes.Axes

  • ‘dict’ : dict of matplotlib.lines.Line2D objects

  • ‘both’ : a namedtuple with structure (ax, lines)

For data grouped with by, return a Series of the above or a numpy array:

  • Series

  • array (for return_type = None)

Use return_type='dict' when you want to tweak the appearance of the lines after plotting. In this case a dict containing the Lines making up the boxes, caps, fliers, medians, and whiskers is returned.

Boxplots can be created for every column in the dataframe by df.boxplot() or indicating the columns to be used:
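For example (the random data below is an illustrative assumption):

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend so the example runs headless
import numpy as np
import pandas as pd

np.random.seed(1234)
df = pd.DataFrame(np.random.randn(10, 4),
                  columns=['Col1', 'Col2', 'Col3', 'Col4'])

# Boxplot restricted to three of the columns
ax = df.boxplot(column=['Col1', 'Col2', 'Col3'])
```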

Boxplots of variable distributions grouped by the values of a third variable can be created using the option by. For instance:
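A sketch of grouping with by (the grouping column X and data are illustrative assumptions):

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend so the example runs headless
import numpy as np
import pandas as pd

np.random.seed(1234)
df = pd.DataFrame(np.random.randn(10, 2), columns=['Col1', 'Col2'])
df['X'] = ['A'] * 5 + ['B'] * 5

# One sub-plot per column, each grouped by the values of X
axes = df.boxplot(column=['Col1', 'Col2'], by='X')
```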

A list of strings (e.g. ['X', 'Y']) can be passed to boxplot in order to group the data by combination of the variables in the x-axis:

The layout of boxplot can be adjusted giving a tuple to layout:

Additional formatting can be done to the boxplot, like suppressing the grid (grid=False), rotating the labels in the x-axis (e.g. rot=45) or changing the fontsize (e.g. fontsize=15):

The parameter return_type can be used to select the type of element returned by boxplot. When return_type='axes' is selected, the matplotlib axes on which the boxplot is drawn are returned:

>>> boxplot = df.boxplot(column=['Col1', 'Col2'], return_type='axes')
>>> type(boxplot)
<class 'matplotlib.axes._subplots.AxesSubplot'>

When grouping with by, a Series mapping columns to return_type is returned:

>>> boxplot = df.boxplot(column=['Col1', 'Col2'], by='X',
...                      return_type='axes')
>>> type(boxplot)
<class 'pandas.core.series.Series'>

If return_type is None, a NumPy array of axes with the same shape as layout is returned:

>>> boxplot = df.boxplot(column=['Col1', 'Col2'], by='X',
...                      return_type=None)
>>> type(boxplot)
<class 'numpy.ndarray'>
add(other, axis='columns', level=None, fill_value=None)

Get Addition of dataframe and other, element-wise (binary operator add).

Equivalent to dataframe + other, but with support to substitute a fill_value for missing data in one of the inputs. With reverse version, radd.

Among flexible wrappers (add, sub, mul, div, mod, pow) to arithmetic operators: +, -, *, /, //, %, **.

other : scalar, sequence, Series, or DataFrame

Any single or multiple element data structure, or list-like object.

axis : {0 or ‘index’, 1 or ‘columns’}

Whether to compare by the index (0 or ‘index’) or columns (1 or ‘columns’). For Series input, axis to match Series index on.

level : int or label

Broadcast across a level, matching Index values on the passed MultiIndex level.

fill_value : float or None, default None

Fill existing missing (NaN) values, and any new element needed for successful DataFrame alignment, with this value before computation. If data in both corresponding DataFrame locations is missing the result will be missing.

DataFrame

Result of the arithmetic operation.

DataFrame.add : Add DataFrames.
DataFrame.sub : Subtract DataFrames.
DataFrame.mul : Multiply DataFrames.
DataFrame.div : Divide DataFrames (float division).
DataFrame.truediv : Divide DataFrames (float division).
DataFrame.floordiv : Divide DataFrames (integer division).
DataFrame.mod : Calculate modulo (remainder after division).
DataFrame.pow : Calculate exponential power.

Mismatched indices will be unioned together.

>>> df = pd.DataFrame({'angles': [0, 3, 4],
...                    'degrees': [360, 180, 360]},
...                   index=['circle', 'triangle', 'rectangle'])
>>> df
           angles  degrees
circle          0      360
triangle        3      180
rectangle       4      360

Add a scalar with the operator version, which returns the same results.

>>> df + 1
           angles  degrees
circle          1      361
triangle        4      181
rectangle       5      361
>>> df.add(1)
           angles  degrees
circle          1      361
triangle        4      181
rectangle       5      361

Divide by constant with reverse version.

>>> df.div(10)
           angles  degrees
circle        0.0     36.0
triangle      0.3     18.0
rectangle     0.4     36.0
>>> df.rdiv(10)
             angles   degrees
circle          inf  0.027778
triangle   3.333333  0.055556
rectangle  2.500000  0.027778

Subtract a list and Series by axis with operator version.

>>> df - [1, 2]
           angles  degrees
circle         -1      358
triangle        2      178
rectangle       3      358
>>> df.sub([1, 2], axis='columns')
           angles  degrees
circle         -1      358
triangle        2      178
rectangle       3      358
>>> df.sub(pd.Series([1, 1, 1], index=['circle', 'triangle', 'rectangle']),
...        axis='index')
           angles  degrees
circle         -1      359
triangle        2      179
rectangle       3      359

Multiply a DataFrame of different shape with operator version.

>>> other = pd.DataFrame({'angles': [0, 3, 4]},
...                      index=['circle', 'triangle', 'rectangle'])
>>> other
           angles
circle          0
triangle        3
rectangle       4
>>> df * other
           angles  degrees
circle          0      NaN
triangle        9      NaN
rectangle      16      NaN
>>> df.mul(other, fill_value=0)
           angles  degrees
circle          0      0.0
triangle        9      0.0
rectangle      16      0.0

Divide by a MultiIndex by level.

>>> df_multindex = pd.DataFrame({'angles': [0, 3, 4, 4, 5, 6],
...                              'degrees': [360, 180, 360, 360, 540, 720]},
...                             index=[['A', 'A', 'A', 'B', 'B', 'B'],
...                                    ['circle', 'triangle', 'rectangle',
...                                     'square', 'pentagon', 'hexagon']])
>>> df_multindex
             angles  degrees
A circle          0      360
  triangle        3      180
  rectangle       4      360
B square          4      360
  pentagon        5      540
  hexagon         6      720
>>> df.div(df_multindex, level=1, fill_value=0)
             angles  degrees
A circle        NaN      1.0
  triangle      1.0      1.0
  rectangle     1.0      1.0
B square        0.0      0.0
  pentagon      0.0      0.0
  hexagon       0.0      0.0
all(axis=0, bool_only=None, skipna=True, level=None, **kwargs)

Return whether all elements are True, potentially over an axis.

Returns True unless there is at least one element within a series or along a Dataframe axis that is False or equivalent (e.g. zero or empty).

axis : {0 or ‘index’, 1 or ‘columns’, None}, default 0

Indicate which axis or axes should be reduced.

  • 0 / ‘index’ : reduce the index, return a Series whose index is the original column labels.

  • 1 / ‘columns’ : reduce the columns, return a Series whose index is the original index.

  • None : reduce all axes, return a scalar.

bool_only : bool, default None

Include only boolean columns. If None, will attempt to use everything, then use only boolean data. Not implemented for Series.

skipna : bool, default True

Exclude NA/null values. If the entire row/column is NA and skipna is True, then the result will be True, as for an empty row/column. If skipna is False, then NA are treated as True, because these are not equal to zero.

level : int or level name, default None

If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing into a Series.

**kwargs : any, default None

Additional keywords have no effect but might be accepted for compatibility with NumPy.

Series or DataFrame

If level is specified, then, DataFrame is returned; otherwise, Series is returned.

Series.all : Return True if all elements are True.
DataFrame.any : Return True if one (or more) elements are True.

Series

>>> pd.Series([True, True]).all()
True
>>> pd.Series([True, False]).all()
False
>>> pd.Series([]).all()
True
>>> pd.Series([np.nan]).all()
True
>>> pd.Series([np.nan]).all(skipna=False)
True

DataFrames

Create a dataframe from a dictionary.

>>> df = pd.DataFrame({'col1': [True, True], 'col2': [True, False]})
>>> df
   col1   col2
0  True   True
1  True  False

Default behaviour checks if column-wise values all return True.

>>> df.all()
col1     True
col2    False
dtype: bool

Specify axis='columns' to check if row-wise values all return True.

>>> df.all(axis='columns')
0     True
1    False
dtype: bool

Or axis=None for whether every value is True.

>>> df.all(axis=None)
False
any(axis=0, bool_only=None, skipna=True, level=None, **kwargs)

Return whether any element is True, potentially over an axis.

Returns False unless there is at least one element within a series or along a Dataframe axis that is True or equivalent (e.g. non-zero or non-empty).

axis : {0 or ‘index’, 1 or ‘columns’, None}, default 0

Indicate which axis or axes should be reduced.

  • 0 / ‘index’ : reduce the index, return a Series whose index is the original column labels.

  • 1 / ‘columns’ : reduce the columns, return a Series whose index is the original index.

  • None : reduce all axes, return a scalar.

bool_only : bool, default None

Include only boolean columns. If None, will attempt to use everything, then use only boolean data. Not implemented for Series.

skipna : bool, default True

Exclude NA/null values. If the entire row/column is NA and skipna is True, then the result will be False, as for an empty row/column. If skipna is False, then NA are treated as True, because these are not equal to zero.

level : int or level name, default None

If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing into a Series.

**kwargs : any, default None

Additional keywords have no effect but might be accepted for compatibility with NumPy.

Series or DataFrame

If level is specified, then, DataFrame is returned; otherwise, Series is returned.

numpy.any : Numpy version of this method.
Series.any : Return whether any element is True.
Series.all : Return whether all elements are True.
DataFrame.any : Return whether any element is True over requested axis.
DataFrame.all : Return whether all elements are True over requested axis.

Series

For Series input, the output is a scalar indicating whether any element is True.

>>> pd.Series([False, False]).any()
False
>>> pd.Series([True, False]).any()
True
>>> pd.Series([]).any()
False
>>> pd.Series([np.nan]).any()
False
>>> pd.Series([np.nan]).any(skipna=False)
True

DataFrame

Whether each column contains at least one True element (the default).

>>> df = pd.DataFrame({"A": [1, 2], "B": [0, 2], "C": [0, 0]})
>>> df
   A  B  C
0  1  0  0
1  2  2  0
>>> df.any()
A     True
B     True
C    False
dtype: bool

Aggregating over the columns.

>>> df = pd.DataFrame({"A": [True, False], "B": [1, 2]})
>>> df
       A  B
0   True  1
1  False  2
>>> df.any(axis='columns')
0    True
1    True
dtype: bool
>>> df = pd.DataFrame({"A": [True, False], "B": [1, 0]})
>>> df
       A  B
0   True  1
1  False  0
>>> df.any(axis='columns')
0    True
1    False
dtype: bool

Aggregating over the entire DataFrame with axis=None.

>>> df.any(axis=None)
True

any for an empty DataFrame is an empty Series.

>>> pd.DataFrame([]).any()
Series([], dtype: bool)
cummax(axis=None, skipna=True, *args, **kwargs)

Return cumulative maximum over a DataFrame or Series axis.

Returns a DataFrame or Series of the same size containing the cumulative maximum.

axis : {0 or ‘index’, 1 or ‘columns’}, default 0

The index or the name of the axis. 0 is equivalent to None or ‘index’.

skipna : bool, default True

Exclude NA/null values. If an entire row/column is NA, the result will be NA.

*args, **kwargs

Additional keywords have no effect but might be accepted for compatibility with NumPy.

Series or DataFrame

Return cumulative maximum of Series or DataFrame.

core.window.Expanding.max : Similar functionality but ignores NaN values.

DataFrame.max : Return the maximum over DataFrame axis.

DataFrame.cummax : Return cumulative maximum over DataFrame axis.
DataFrame.cummin : Return cumulative minimum over DataFrame axis.
DataFrame.cumsum : Return cumulative sum over DataFrame axis.
DataFrame.cumprod : Return cumulative product over DataFrame axis.

Series

>>> s = pd.Series([2, np.nan, 5, -1, 0])
>>> s
0    2.0
1    NaN
2    5.0
3   -1.0
4    0.0
dtype: float64

By default, NA values are ignored.

>>> s.cummax()
0    2.0
1    NaN
2    5.0
3    5.0
4    5.0
dtype: float64

To include NA values in the operation, use skipna=False

>>> s.cummax(skipna=False)
0    2.0
1    NaN
2    NaN
3    NaN
4    NaN
dtype: float64

DataFrame

>>> df = pd.DataFrame([[2.0, 1.0],
...                    [3.0, np.nan],
...                    [1.0, 0.0]],
...                    columns=list('AB'))
>>> df
     A    B
0  2.0  1.0
1  3.0  NaN
2  1.0  0.0

By default, iterates over rows and finds the maximum in each column. This is equivalent to axis=None or axis='index'.

>>> df.cummax()
     A    B
0  2.0  1.0
1  3.0  NaN
2  3.0  1.0

To iterate over columns and find the maximum in each row, use axis=1

>>> df.cummax(axis=1)
     A    B
0  2.0  2.0
1  3.0  NaN
2  1.0  1.0
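The See Also entry above contrasts cummax with expanding().max(); a short sketch of the difference (expanding().max() always skips NaN, while cummax propagates it when skipna=False):

```python
import numpy as np
import pandas as pd

s = pd.Series([2, np.nan, 5, -1, 0])

# cummax with skipna=False: the NaN at position 1 poisons everything after it
poisoned = s.cummax(skipna=False)

# expanding().max(): NaN is ignored and the running maximum continues
running = s.expanding().max()
```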
cummin(axis=None, skipna=True, *args, **kwargs)

Return cumulative minimum over a DataFrame or Series axis.

Returns a DataFrame or Series of the same size containing the cumulative minimum.

axis : {0 or ‘index’, 1 or ‘columns’}, default 0

The index or the name of the axis. 0 is equivalent to None or ‘index’.

skipna : bool, default True

Exclude NA/null values. If an entire row/column is NA, the result will be NA.

*args, **kwargs

Additional keywords have no effect but might be accepted for compatibility with NumPy.

Series or DataFrame

Return cumulative minimum of Series or DataFrame.

core.window.Expanding.min : Similar functionality but ignores NaN values.

DataFrame.min : Return the minimum over DataFrame axis.

DataFrame.cummax : Return cumulative maximum over DataFrame axis.
DataFrame.cummin : Return cumulative minimum over DataFrame axis.
DataFrame.cumsum : Return cumulative sum over DataFrame axis.
DataFrame.cumprod : Return cumulative product over DataFrame axis.

Series

>>> s = pd.Series([2, np.nan, 5, -1, 0])
>>> s
0    2.0
1    NaN
2    5.0
3   -1.0
4    0.0
dtype: float64

By default, NA values are ignored.

>>> s.cummin()
0    2.0
1    NaN
2    2.0
3   -1.0
4   -1.0
dtype: float64

To include NA values in the operation, use skipna=False

>>> s.cummin(skipna=False)
0    2.0
1    NaN
2    NaN
3    NaN
4    NaN
dtype: float64

DataFrame

>>> df = pd.DataFrame([[2.0, 1.0],
...                    [3.0, np.nan],
...                    [1.0, 0.0]],
...                    columns=list('AB'))
>>> df
     A    B
0  2.0  1.0
1  3.0  NaN
2  1.0  0.0

By default, iterates over rows and finds the minimum in each column. This is equivalent to axis=None or axis='index'.

>>> df.cummin()
     A    B
0  2.0  1.0
1  2.0  NaN
2  1.0  0.0

To iterate over columns and find the minimum in each row, use axis=1

>>> df.cummin(axis=1)
     A    B
0  2.0  1.0
1  3.0  NaN
2  1.0  0.0
cumprod(axis=None, skipna=True, *args, **kwargs)

Return cumulative product over a DataFrame or Series axis.

Returns a DataFrame or Series of the same size containing the cumulative product.

axis : {0 or ‘index’, 1 or ‘columns’}, default 0

The index or the name of the axis. 0 is equivalent to None or ‘index’.

skipna : bool, default True

Exclude NA/null values. If an entire row/column is NA, the result will be NA.

*args, **kwargs

Additional keywords have no effect but might be accepted for compatibility with NumPy.

Series or DataFrame

Return cumulative product of Series or DataFrame.

core.window.Expanding.prod : Similar functionality but ignores NaN values.

DataFrame.prod : Return the product over DataFrame axis.

DataFrame.cummax : Return cumulative maximum over DataFrame axis.
DataFrame.cummin : Return cumulative minimum over DataFrame axis.
DataFrame.cumsum : Return cumulative sum over DataFrame axis.
DataFrame.cumprod : Return cumulative product over DataFrame axis.

Series

>>> s = pd.Series([2, np.nan, 5, -1, 0])
>>> s
0    2.0
1    NaN
2    5.0
3   -1.0
4    0.0
dtype: float64

By default, NA values are ignored.

>>> s.cumprod()
0     2.0
1     NaN
2    10.0
3   -10.0
4    -0.0
dtype: float64

To include NA values in the operation, use skipna=False

>>> s.cumprod(skipna=False)
0    2.0
1    NaN
2    NaN
3    NaN
4    NaN
dtype: float64

DataFrame

>>> df = pd.DataFrame([[2.0, 1.0],
...                    [3.0, np.nan],
...                    [1.0, 0.0]],
...                    columns=list('AB'))
>>> df
     A    B
0  2.0  1.0
1  3.0  NaN
2  1.0  0.0

By default, iterates over rows and finds the product in each column. This is equivalent to axis=None or axis='index'.

>>> df.cumprod()
     A    B
0  2.0  1.0
1  6.0  NaN
2  6.0  0.0

To iterate over columns and find the product in each row, use axis=1

>>> df.cumprod(axis=1)
     A    B
0  2.0  2.0
1  3.0  NaN
2  1.0  0.0
cumsum(axis=None, skipna=True, *args, **kwargs)

Return cumulative sum over a DataFrame or Series axis.

Returns a DataFrame or Series of the same size containing the cumulative sum.

axis : {0 or ‘index’, 1 or ‘columns’}, default 0

The index or the name of the axis. 0 is equivalent to None or ‘index’.

skipna : bool, default True

Exclude NA/null values. If an entire row/column is NA, the result will be NA.

*args, **kwargs

Additional keywords have no effect but might be accepted for compatibility with NumPy.

Series or DataFrame

Return cumulative sum of Series or DataFrame.

core.window.Expanding.sum : Similar functionality but ignores NaN values.

DataFrame.sum : Return the sum over DataFrame axis.

DataFrame.cummax : Return cumulative maximum over DataFrame axis.
DataFrame.cummin : Return cumulative minimum over DataFrame axis.
DataFrame.cumsum : Return cumulative sum over DataFrame axis.
DataFrame.cumprod : Return cumulative product over DataFrame axis.

Series

>>> s = pd.Series([2, np.nan, 5, -1, 0])
>>> s
0    2.0
1    NaN
2    5.0
3   -1.0
4    0.0
dtype: float64

By default, NA values are ignored.

>>> s.cumsum()
0    2.0
1    NaN
2    7.0
3    6.0
4    6.0
dtype: float64

To include NA values in the operation, use skipna=False

>>> s.cumsum(skipna=False)
0    2.0
1    NaN
2    NaN
3    NaN
4    NaN
dtype: float64

DataFrame

>>> df = pd.DataFrame([[2.0, 1.0],
...                    [3.0, np.nan],
...                    [1.0, 0.0]],
...                    columns=list('AB'))
>>> df
     A    B
0  2.0  1.0
1  3.0  NaN
2  1.0  0.0

By default, iterates over rows and finds the sum in each column. This is equivalent to axis=None or axis='index'.

>>> df.cumsum()
     A    B
0  2.0  1.0
1  5.0  NaN
2  6.0  1.0

To iterate over columns and find the sum in each row, use axis=1

>>> df.cumsum(axis=1)
     A    B
0  2.0  3.0
1  3.0  NaN
2  1.0  1.0
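As a sanity check (not part of the pandas examples above), cumsum and diff invert each other up to the first element:

```python
import pandas as pd

s = pd.Series([2.0, 3.0, 1.0])
c = s.cumsum()                          # 2.0, 5.0, 6.0
# diff leaves NaN in the first slot; restore it from the first cumulative value
recovered = c.diff().fillna(c.iloc[0])
```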
div(other, axis='columns', level=None, fill_value=None)

Get Floating division of dataframe and other, element-wise (binary operator truediv).

Equivalent to dataframe / other, but with support to substitute a fill_value for missing data in one of the inputs. With reverse version, rtruediv.

Among flexible wrappers (add, sub, mul, div, mod, pow) to arithmetic operators: +, -, *, /, //, %, **.

other : scalar, sequence, Series, or DataFrame

Any single or multiple element data structure, or list-like object.

axis : {0 or ‘index’, 1 or ‘columns’}

Whether to compare by the index (0 or ‘index’) or columns (1 or ‘columns’). For Series input, axis to match Series index on.

level : int or label

Broadcast across a level, matching Index values on the passed MultiIndex level.

fill_value : float or None, default None

Fill existing missing (NaN) values, and any new element needed for successful DataFrame alignment, with this value before computation. If data in both corresponding DataFrame locations is missing the result will be missing.

DataFrame

Result of the arithmetic operation.

DataFrame.add : Add DataFrames.
DataFrame.sub : Subtract DataFrames.
DataFrame.mul : Multiply DataFrames.
DataFrame.div : Divide DataFrames (float division).
DataFrame.truediv : Divide DataFrames (float division).
DataFrame.floordiv : Divide DataFrames (integer division).
DataFrame.mod : Calculate modulo (remainder after division).
DataFrame.pow : Calculate exponential power.

Mismatched indices will be unioned together.

>>> df = pd.DataFrame({'angles': [0, 3, 4],
...                    'degrees': [360, 180, 360]},
...                   index=['circle', 'triangle', 'rectangle'])
>>> df
           angles  degrees
circle          0      360
triangle        3      180
rectangle       4      360

Add a scalar with the operator version, which returns the same results.

>>> df + 1
           angles  degrees
circle          1      361
triangle        4      181
rectangle       5      361
>>> df.add(1)
           angles  degrees
circle          1      361
triangle        4      181
rectangle       5      361

Divide by constant with reverse version.

>>> df.div(10)
           angles  degrees
circle        0.0     36.0
triangle      0.3     18.0
rectangle     0.4     36.0
>>> df.rdiv(10)
             angles   degrees
circle          inf  0.027778
triangle   3.333333  0.055556
rectangle  2.500000  0.027778

Subtract a list and Series by axis with operator version.

>>> df - [1, 2]
           angles  degrees
circle         -1      358
triangle        2      178
rectangle       3      358
>>> df.sub([1, 2], axis='columns')
           angles  degrees
circle         -1      358
triangle        2      178
rectangle       3      358
>>> df.sub(pd.Series([1, 1, 1], index=['circle', 'triangle', 'rectangle']),
...        axis='index')
           angles  degrees
circle         -1      359
triangle        2      179
rectangle       3      359

Multiply a DataFrame of different shape with operator version.

>>> other = pd.DataFrame({'angles': [0, 3, 4]},
...                      index=['circle', 'triangle', 'rectangle'])
>>> other
           angles
circle          0
triangle        3
rectangle       4
>>> df * other
           angles  degrees
circle          0      NaN
triangle        9      NaN
rectangle      16      NaN
>>> df.mul(other, fill_value=0)
           angles  degrees
circle          0      0.0
triangle        9      0.0
rectangle      16      0.0

Divide by a MultiIndex by level.

>>> df_multindex = pd.DataFrame({'angles': [0, 3, 4, 4, 5, 6],
...                              'degrees': [360, 180, 360, 360, 540, 720]},
...                             index=[['A', 'A', 'A', 'B', 'B', 'B'],
...                                    ['circle', 'triangle', 'rectangle',
...                                     'square', 'pentagon', 'hexagon']])
>>> df_multindex
             angles  degrees
A circle          0      360
  triangle        3      180
  rectangle       4      360
B square          4      360
  pentagon        5      540
  hexagon         6      720
>>> df.div(df_multindex, level=1, fill_value=0)
             angles  degrees
A circle        NaN      1.0
  triangle      1.0      1.0
  rectangle     1.0      1.0
B square        0.0      0.0
  pentagon      0.0      0.0
  hexagon       0.0      0.0
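A minimal sketch (labels here are illustrative, not from the docs above) of how fill_value changes alignment when indices only partially overlap:

```python
import pandas as pd

a = pd.Series([1.0, 2.0], index=['x', 'y'])
b = pd.Series([2.0, 4.0], index=['y', 'z'])

plain = a.div(b)                 # 'x' and 'z' have no counterpart -> NaN
filled = a.div(b, fill_value=1)  # the missing side is filled with 1 before dividing
```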
divide(other, axis='columns', level=None, fill_value=None)

Get Floating division of dataframe and other, element-wise (binary operator truediv).

Equivalent to dataframe / other, but with support to substitute a fill_value for missing data in one of the inputs. With reverse version, rtruediv.

Among flexible wrappers (add, sub, mul, div, mod, pow) to arithmetic operators: +, -, *, /, //, %, **.

other : scalar, sequence, Series, or DataFrame

Any single or multiple element data structure, or list-like object.

axis : {0 or ‘index’, 1 or ‘columns’}

Whether to compare by the index (0 or ‘index’) or columns (1 or ‘columns’). For Series input, axis to match Series index on.

level : int or label

Broadcast across a level, matching Index values on the passed MultiIndex level.

fill_value : float or None, default None

Fill existing missing (NaN) values, and any new element needed for successful DataFrame alignment, with this value before computation. If data in both corresponding DataFrame locations is missing the result will be missing.

DataFrame

Result of the arithmetic operation.

DataFrame.add : Add DataFrames.
DataFrame.sub : Subtract DataFrames.
DataFrame.mul : Multiply DataFrames.
DataFrame.div : Divide DataFrames (float division).
DataFrame.truediv : Divide DataFrames (float division).
DataFrame.floordiv : Divide DataFrames (integer division).
DataFrame.mod : Calculate modulo (remainder after division).
DataFrame.pow : Calculate exponential power.

Mismatched indices will be unioned together.

>>> df = pd.DataFrame({'angles': [0, 3, 4],
...                    'degrees': [360, 180, 360]},
...                   index=['circle', 'triangle', 'rectangle'])
>>> df
           angles  degrees
circle          0      360
triangle        3      180
rectangle       4      360

Add a scalar with the operator version, which returns the same results.

>>> df + 1
           angles  degrees
circle          1      361
triangle        4      181
rectangle       5      361
>>> df.add(1)
           angles  degrees
circle          1      361
triangle        4      181
rectangle       5      361

Divide by constant with reverse version.

>>> df.div(10)
           angles  degrees
circle        0.0     36.0
triangle      0.3     18.0
rectangle     0.4     36.0
>>> df.rdiv(10)
             angles   degrees
circle          inf  0.027778
triangle   3.333333  0.055556
rectangle  2.500000  0.027778

Subtract a list and Series by axis with operator version.

>>> df - [1, 2]
           angles  degrees
circle         -1      358
triangle        2      178
rectangle       3      358
>>> df.sub([1, 2], axis='columns')
           angles  degrees
circle         -1      358
triangle        2      178
rectangle       3      358
>>> df.sub(pd.Series([1, 1, 1], index=['circle', 'triangle', 'rectangle']),
...        axis='index')
           angles  degrees
circle         -1      359
triangle        2      179
rectangle       3      359

Multiply a DataFrame of different shape with operator version.

>>> other = pd.DataFrame({'angles': [0, 3, 4]},
...                      index=['circle', 'triangle', 'rectangle'])
>>> other
           angles
circle          0
triangle        3
rectangle       4
>>> df * other
           angles  degrees
circle          0      NaN
triangle        9      NaN
rectangle      16      NaN
>>> df.mul(other, fill_value=0)
           angles  degrees
circle          0      0.0
triangle        9      0.0
rectangle      16      0.0

Divide by a MultiIndex by level.

>>> df_multindex = pd.DataFrame({'angles': [0, 3, 4, 4, 5, 6],
...                              'degrees': [360, 180, 360, 360, 540, 720]},
...                             index=[['A', 'A', 'A', 'B', 'B', 'B'],
...                                    ['circle', 'triangle', 'rectangle',
...                                     'square', 'pentagon', 'hexagon']])
>>> df_multindex
             angles  degrees
A circle          0      360
  triangle        3      180
  rectangle       4      360
B square          4      360
  pentagon        5      540
  hexagon         6      720
>>> df.div(df_multindex, level=1, fill_value=0)
             angles  degrees
A circle        NaN      1.0
  triangle      1.0      1.0
  rectangle     1.0      1.0
B square        0.0      0.0
  pentagon      0.0      0.0
  hexagon       0.0      0.0
eq(other, axis='columns', level=None)

Get Equal to of dataframe and other, element-wise (binary operator eq).

Among flexible wrappers (eq, ne, le, lt, ge, gt) to comparison operators.

Equivalent to ==, !=, <=, <, >=, > with support to choose axis (rows or columns) and level for comparison.

other : scalar, sequence, Series, or DataFrame

Any single or multiple element data structure, or list-like object.

axis : {0 or ‘index’, 1 or ‘columns’}, default ‘columns’

Whether to compare by the index (0 or ‘index’) or columns (1 or ‘columns’).

level : int or label

Broadcast across a level, matching Index values on the passed MultiIndex level.

DataFrame of bool

Result of the comparison.

DataFrame.eq : Compare DataFrames for equality elementwise.
DataFrame.ne : Compare DataFrames for inequality elementwise.
DataFrame.le : Compare DataFrames for less than inequality or equality elementwise.
DataFrame.lt : Compare DataFrames for strictly less than inequality elementwise.
DataFrame.ge : Compare DataFrames for greater than inequality or equality elementwise.
DataFrame.gt : Compare DataFrames for strictly greater than inequality elementwise.

Mismatched indices will be unioned together. NaN values are considered different (i.e. NaN != NaN).

>>> df = pd.DataFrame({'cost': [250, 150, 100],
...                    'revenue': [100, 250, 300]},
...                   index=['A', 'B', 'C'])
>>> df
   cost  revenue
A   250      100
B   150      250
C   100      300

Comparison with a scalar, using either the operator or method:

>>> df == 100
    cost  revenue
A  False     True
B  False    False
C   True    False
>>> df.eq(100)
    cost  revenue
A  False     True
B  False    False
C   True    False

When other is a Series, the columns of a DataFrame are aligned with the index of other and broadcast:

>>> df != pd.Series([100, 250], index=["cost", "revenue"])
    cost  revenue
A   True     True
B   True    False
C  False     True

Use the method to control the broadcast axis:

>>> df.ne(pd.Series([100, 300], index=["A", "D"]), axis='index')
   cost  revenue
A  True    False
B  True     True
C  True     True
D  True     True

When comparing to an arbitrary sequence, the number of columns must match the number of elements in other:

>>> df == [250, 100]
    cost  revenue
A   True     True
B  False    False
C  False    False

Use the method to control the axis:

>>> df.eq([250, 250, 100], axis='index')
    cost  revenue
A   True    False
B  False     True
C   True    False

Compare to a DataFrame of different shape.

>>> other = pd.DataFrame({'revenue': [300, 250, 100, 150]},
...                      index=['A', 'B', 'C', 'D'])
>>> other
   revenue
A      300
B      250
C      100
D      150
>>> df.gt(other)
    cost  revenue
A  False    False
B  False    False
C  False     True
D  False    False

Compare to a MultiIndex by level.

>>> df_multindex = pd.DataFrame({'cost': [250, 150, 100, 150, 300, 220],
...                              'revenue': [100, 250, 300, 200, 175, 225]},
...                             index=[['Q1', 'Q1', 'Q1', 'Q2', 'Q2', 'Q2'],
...                                    ['A', 'B', 'C', 'A', 'B', 'C']])
>>> df_multindex
      cost  revenue
Q1 A   250      100
   B   150      250
   C   100      300
Q2 A   150      200
   B   300      175
   C   220      225
>>> df.le(df_multindex, level=1)
       cost  revenue
Q1 A   True     True
   B   True     True
   C   True     True
Q2 A  False     True
   B   True    False
   C   True    False
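The note above that NaN values are considered different can be verified directly: a DataFrame is not element-wise equal to itself wherever it holds NaN.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [1.0, np.nan]})
same = df.eq(df)  # True for 1.0, False where NaN != NaN
```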
floordiv(other, axis='columns', level=None, fill_value=None)

Get Integer division of dataframe and other, element-wise (binary operator floordiv).

Equivalent to dataframe // other, but with support to substitute a fill_value for missing data in one of the inputs. With reverse version, rfloordiv.

Among flexible wrappers (add, sub, mul, div, mod, pow) to arithmetic operators: +, -, *, /, //, %, **.

other : scalar, sequence, Series, or DataFrame

Any single or multiple element data structure, or list-like object.

axis : {0 or ‘index’, 1 or ‘columns’}

Whether to compare by the index (0 or ‘index’) or columns (1 or ‘columns’). For Series input, axis to match Series index on.

level : int or label

Broadcast across a level, matching Index values on the passed MultiIndex level.

fill_value : float or None, default None

Fill existing missing (NaN) values, and any new element needed for successful DataFrame alignment, with this value before computation. If data in both corresponding DataFrame locations is missing the result will be missing.

DataFrame

Result of the arithmetic operation.

DataFrame.add : Add DataFrames.
DataFrame.sub : Subtract DataFrames.
DataFrame.mul : Multiply DataFrames.
DataFrame.div : Divide DataFrames (float division).
DataFrame.truediv : Divide DataFrames (float division).
DataFrame.floordiv : Divide DataFrames (integer division).
DataFrame.mod : Calculate modulo (remainder after division).
DataFrame.pow : Calculate exponential power.

Mismatched indices will be unioned together.

>>> df = pd.DataFrame({'angles': [0, 3, 4],
...                    'degrees': [360, 180, 360]},
...                   index=['circle', 'triangle', 'rectangle'])
>>> df
           angles  degrees
circle          0      360
triangle        3      180
rectangle       4      360

Add a scalar with the operator version, which returns the same results.

>>> df + 1
           angles  degrees
circle          1      361
triangle        4      181
rectangle       5      361
>>> df.add(1)
           angles  degrees
circle          1      361
triangle        4      181
rectangle       5      361

Divide by constant with reverse version.

>>> df.div(10)
           angles  degrees
circle        0.0     36.0
triangle      0.3     18.0
rectangle     0.4     36.0
>>> df.rdiv(10)
             angles   degrees
circle          inf  0.027778
triangle   3.333333  0.055556
rectangle  2.500000  0.027778

Subtract a list and Series by axis with operator version.

>>> df - [1, 2]
           angles  degrees
circle         -1      358
triangle        2      178
rectangle       3      358
>>> df.sub([1, 2], axis='columns')
           angles  degrees
circle         -1      358
triangle        2      178
rectangle       3      358
>>> df.sub(pd.Series([1, 1, 1], index=['circle', 'triangle', 'rectangle']),
...        axis='index')
           angles  degrees
circle         -1      359
triangle        2      179
rectangle       3      359

Multiply a DataFrame of different shape with operator version.

>>> other = pd.DataFrame({'angles': [0, 3, 4]},
...                      index=['circle', 'triangle', 'rectangle'])
>>> other
           angles
circle          0
triangle        3
rectangle       4
>>> df * other
           angles  degrees
circle          0      NaN
triangle        9      NaN
rectangle      16      NaN
>>> df.mul(other, fill_value=0)
           angles  degrees
circle          0      0.0
triangle        9      0.0
rectangle      16      0.0

Divide by a MultiIndex by level.

>>> df_multindex = pd.DataFrame({'angles': [0, 3, 4, 4, 5, 6],
...                              'degrees': [360, 180, 360, 360, 540, 720]},
...                             index=[['A', 'A', 'A', 'B', 'B', 'B'],
...                                    ['circle', 'triangle', 'rectangle',
...                                     'square', 'pentagon', 'hexagon']])
>>> df_multindex
             angles  degrees
A circle          0      360
  triangle        3      180
  rectangle       4      360
B square          4      360
  pentagon        5      540
  hexagon         6      720
>>> df.div(df_multindex, level=1, fill_value=0)
             angles  degrees
A circle        NaN      1.0
  triangle      1.0      1.0
  rectangle     1.0      1.0
B square        0.0      0.0
  pentagon      0.0      0.0
  hexagon       0.0      0.0
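One subtlety the shared examples above do not show: like Python's //, floordiv rounds toward negative infinity, so negative results differ from simple truncation. A small sketch:

```python
import pandas as pd

df = pd.DataFrame({'x': [7, -7]})
true_div = df.div(2)        # 3.5 and -3.5
floor_div = df.floordiv(2)  # 3 and -4 (floor, not truncation toward zero)
```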
ge(other, axis='columns', level=None)

Get Greater than or equal to of dataframe and other, element-wise (binary operator ge).

Among flexible wrappers (eq, ne, le, lt, ge, gt) to comparison operators.

Equivalent to ==, !=, <=, <, >=, > with support to choose axis (rows or columns) and level for comparison.

other : scalar, sequence, Series, or DataFrame

Any single or multiple element data structure, or list-like object.

axis : {0 or ‘index’, 1 or ‘columns’}, default ‘columns’

Whether to compare by the index (0 or ‘index’) or columns (1 or ‘columns’).

level : int or label

Broadcast across a level, matching Index values on the passed MultiIndex level.

DataFrame of bool

Result of the comparison.

DataFrame.eq : Compare DataFrames for equality elementwise.
DataFrame.ne : Compare DataFrames for inequality elementwise.
DataFrame.le : Compare DataFrames for less than inequality or equality elementwise.
DataFrame.lt : Compare DataFrames for strictly less than inequality elementwise.
DataFrame.ge : Compare DataFrames for greater than inequality or equality elementwise.
DataFrame.gt : Compare DataFrames for strictly greater than inequality elementwise.

Mismatched indices will be unioned together. NaN values are considered different (i.e. NaN != NaN).

>>> df = pd.DataFrame({'cost': [250, 150, 100],
...                    'revenue': [100, 250, 300]},
...                   index=['A', 'B', 'C'])
>>> df
   cost  revenue
A   250      100
B   150      250
C   100      300

Comparison with a scalar, using either the operator or method:

>>> df == 100
    cost  revenue
A  False     True
B  False    False
C   True    False
>>> df.eq(100)
    cost  revenue
A  False     True
B  False    False
C   True    False

When other is a Series, the columns of a DataFrame are aligned with the index of other and broadcast:

>>> df != pd.Series([100, 250], index=["cost", "revenue"])
    cost  revenue
A   True     True
B   True    False
C  False     True

Use the method to control the broadcast axis:

>>> df.ne(pd.Series([100, 300], index=["A", "D"]), axis='index')
   cost  revenue
A  True    False
B  True     True
C  True     True
D  True     True

When comparing to an arbitrary sequence, the number of columns must match the number of elements in other:

>>> df == [250, 100]
    cost  revenue
A   True     True
B  False    False
C  False    False

Use the method to control the axis:

>>> df.eq([250, 250, 100], axis='index')
    cost  revenue
A   True    False
B  False     True
C   True    False

Compare to a DataFrame of different shape.

>>> other = pd.DataFrame({'revenue': [300, 250, 100, 150]},
...                      index=['A', 'B', 'C', 'D'])
>>> other
   revenue
A      300
B      250
C      100
D      150
>>> df.gt(other)
    cost  revenue
A  False    False
B  False    False
C  False     True
D  False    False

Compare to a MultiIndex by level.

>>> df_multindex = pd.DataFrame({'cost': [250, 150, 100, 150, 300, 220],
...                              'revenue': [100, 250, 300, 200, 175, 225]},
...                             index=[['Q1', 'Q1', 'Q1', 'Q2', 'Q2', 'Q2'],
...                                    ['A', 'B', 'C', 'A', 'B', 'C']])
>>> df_multindex
      cost  revenue
Q1 A   250      100
   B   150      250
   C   100      300
Q2 A   150      200
   B   300      175
   C   220      225
>>> df.le(df_multindex, level=1)
       cost  revenue
Q1 A   True     True
   B   True     True
   C   True     True
Q2 A  False     True
   B   True    False
   C   True    False
gt(other, axis='columns', level=None)

Get Greater than of dataframe and other, element-wise (binary operator gt).

Among flexible wrappers (eq, ne, le, lt, ge, gt) to comparison operators.

Equivalent to ==, !=, <=, <, >=, > with support to choose axis (rows or columns) and level for comparison.

other : scalar, sequence, Series, or DataFrame

Any single or multiple element data structure, or list-like object.

axis : {0 or ‘index’, 1 or ‘columns’}, default ‘columns’

Whether to compare by the index (0 or ‘index’) or columns (1 or ‘columns’).

level : int or label

Broadcast across a level, matching Index values on the passed MultiIndex level.

DataFrame of bool

Result of the comparison.

DataFrame.eq : Compare DataFrames for equality elementwise.
DataFrame.ne : Compare DataFrames for inequality elementwise.
DataFrame.le : Compare DataFrames for less than inequality or equality elementwise.
DataFrame.lt : Compare DataFrames for strictly less than inequality elementwise.
DataFrame.ge : Compare DataFrames for greater than inequality or equality elementwise.
DataFrame.gt : Compare DataFrames for strictly greater than inequality elementwise.

Mismatched indices will be unioned together. NaN values are considered different (i.e. NaN != NaN).

>>> df = pd.DataFrame({'cost': [250, 150, 100],
...                    'revenue': [100, 250, 300]},
...                   index=['A', 'B', 'C'])
>>> df
   cost  revenue
A   250      100
B   150      250
C   100      300

Comparison with a scalar, using either the operator or method:

>>> df == 100
    cost  revenue
A  False     True
B  False    False
C   True    False
>>> df.eq(100)
    cost  revenue
A  False     True
B  False    False
C   True    False

When other is a Series, the columns of a DataFrame are aligned with the index of other and broadcast:

>>> df != pd.Series([100, 250], index=["cost", "revenue"])
    cost  revenue
A   True     True
B   True    False
C  False     True

Use the method to control the broadcast axis:

>>> df.ne(pd.Series([100, 300], index=["A", "D"]), axis='index')
   cost  revenue
A  True    False
B  True     True
C  True     True
D  True     True

When comparing to an arbitrary sequence, the number of columns must match the number of elements in other:

>>> df == [250, 100]
    cost  revenue
A   True     True
B  False    False
C  False    False

Use the method to control the axis:

>>> df.eq([250, 250, 100], axis='index')
    cost  revenue
A   True    False
B  False     True
C   True    False

Compare to a DataFrame of different shape.

>>> other = pd.DataFrame({'revenue': [300, 250, 100, 150]},
...                      index=['A', 'B', 'C', 'D'])
>>> other
   revenue
A      300
B      250
C      100
D      150
>>> df.gt(other)
    cost  revenue
A  False    False
B  False    False
C  False     True
D  False    False

Compare to a MultiIndex by level.

>>> df_multindex = pd.DataFrame({'cost': [250, 150, 100, 150, 300, 220],
...                              'revenue': [100, 250, 300, 200, 175, 225]},
...                             index=[['Q1', 'Q1', 'Q1', 'Q2', 'Q2', 'Q2'],
...                                    ['A', 'B', 'C', 'A', 'B', 'C']])
>>> df_multindex
      cost  revenue
Q1 A   250      100
   B   150      250
   C   100      300
Q2 A   150      200
   B   300      175
   C   220      225
>>> df.le(df_multindex, level=1)
       cost  revenue
Q1 A   True     True
   B   True     True
   C   True     True
Q2 A  False     True
   B   True    False
   C   True    False
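The only difference between gt and ge is how ties are treated; a minimal sketch:

```python
import pandas as pd

df = pd.DataFrame({'v': [1, 2, 3]})
strict = df.gt(2)     # strictly greater than: the tie at 2 is False
inclusive = df.ge(2)  # greater than or equal: the tie at 2 is True
```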
kurt(axis=None, skipna=None, level=None, numeric_only=None, **kwargs)

Return unbiased kurtosis over requested axis.

Kurtosis obtained using Fisher’s definition of kurtosis (kurtosis of normal == 0.0). Normalized by N-1.

axis : {index (0), columns (1)}

Axis for the function to be applied on.

skipna : bool, default True

Exclude NA/null values when computing the result.

level : int or level name, default None

If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing into a Series.

numeric_only : bool, default None

Include only float, int, boolean columns. If None, will attempt to use everything, then use only numeric data. Not implemented for Series.

**kwargs

Additional keyword arguments to be passed to the function.

Series or DataFrame (if level specified)

kurtosis(axis=None, skipna=None, level=None, numeric_only=None, **kwargs)

Return unbiased kurtosis over requested axis.

Kurtosis obtained using Fisher’s definition of kurtosis (kurtosis of normal == 0.0). Normalized by N-1.

axis : {index (0), columns (1)}

Axis for the function to be applied on.

skipna : bool, default True

Exclude NA/null values when computing the result.

level : int or level name, default None

If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing into a Series.

numeric_only : bool, default None

Include only float, int, boolean columns. If None, will attempt to use everything, then use only numeric data. Not implemented for Series.

**kwargs

Additional keyword arguments to be passed to the function.

Series or DataFrame (if level specified)
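kurtosis is an alias of kurt. A quick check of Fisher's definition (a normal distribution has kurtosis 0, so a flat, evenly spaced sample comes out negative):

```python
import pandas as pd

s = pd.Series([1, 2, 3, 4])
k = s.kurt()  # -1.2 for this evenly spaced four-point sample
```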

le(other, axis='columns', level=None)

Get Less than or equal to of dataframe and other, element-wise (binary operator le).

Among flexible wrappers (eq, ne, le, lt, ge, gt) to comparison operators.

Equivalent to ==, !=, <=, <, >=, > with support to choose axis (rows or columns) and level for comparison.

other : scalar, sequence, Series, or DataFrame

Any single or multiple element data structure, or list-like object.

axis : {0 or ‘index’, 1 or ‘columns’}, default ‘columns’

Whether to compare by the index (0 or ‘index’) or columns (1 or ‘columns’).

level : int or label

Broadcast across a level, matching Index values on the passed MultiIndex level.

DataFrame of bool

Result of the comparison.

DataFrame.eq : Compare DataFrames for equality elementwise.
DataFrame.ne : Compare DataFrames for inequality elementwise.
DataFrame.le : Compare DataFrames for less than inequality or equality elementwise.
DataFrame.lt : Compare DataFrames for strictly less than inequality elementwise.
DataFrame.ge : Compare DataFrames for greater than inequality or equality elementwise.
DataFrame.gt : Compare DataFrames for strictly greater than inequality elementwise.

Mismatched indices will be unioned together. NaN values are considered different (i.e. NaN != NaN).

>>> df = pd.DataFrame({'cost': [250, 150, 100],
...                    'revenue': [100, 250, 300]},
...                   index=['A', 'B', 'C'])
>>> df
   cost  revenue
A   250      100
B   150      250
C   100      300

Comparison with a scalar, using either the operator or method:

>>> df == 100
    cost  revenue
A  False     True
B  False    False
C   True    False
>>> df.eq(100)
    cost  revenue
A  False     True
B  False    False
C   True    False

When other is a Series, the columns of a DataFrame are aligned with the index of other and broadcast:

>>> df != pd.Series([100, 250], index=["cost", "revenue"])
    cost  revenue
A   True     True
B   True    False
C  False     True

Use the method to control the broadcast axis:

>>> df.ne(pd.Series([100, 300], index=["A", "D"]), axis='index')
   cost  revenue
A  True    False
B  True     True
C  True     True
D  True     True

When comparing to an arbitrary sequence, the number of columns must match the number of elements in other:

>>> df == [250, 100]
    cost  revenue
A   True     True
B  False    False
C  False    False

Use the method to control the axis:

>>> df.eq([250, 250, 100], axis='index')
    cost  revenue
A   True    False
B  False     True
C   True    False

Compare to a DataFrame of different shape.

>>> other = pd.DataFrame({'revenue': [300, 250, 100, 150]},
...                      index=['A', 'B', 'C', 'D'])
>>> other
   revenue
A      300
B      250
C      100
D      150
>>> df.gt(other)
    cost  revenue
A  False    False
B  False    False
C  False     True
D  False    False

Compare to a MultiIndex by level.

>>> df_multindex = pd.DataFrame({'cost': [250, 150, 100, 150, 300, 220],
...                              'revenue': [100, 250, 300, 200, 175, 225]},
...                             index=[['Q1', 'Q1', 'Q1', 'Q2', 'Q2', 'Q2'],
...                                    ['A', 'B', 'C', 'A', 'B', 'C']])
>>> df_multindex
      cost  revenue
Q1 A   250      100
   B   150      250
   C   100      300
Q2 A   150      200
   B   300      175
   C   220      225
>>> df.le(df_multindex, level=1)
       cost  revenue
Q1 A   True     True
   B   True     True
   C   True     True
Q2 A  False     True
   B   True    False
   C   True    False
lt(other, axis='columns', level=None)

Get Less than of dataframe and other, element-wise (binary operator lt).

Among flexible wrappers (eq, ne, le, lt, ge, gt) to comparison operators.

Equivalent to ==, !=, <=, <, >=, > with support to choose axis (rows or columns) and level for comparison.

other : scalar, sequence, Series, or DataFrame

Any single or multiple element data structure, or list-like object.

axis : {0 or ‘index’, 1 or ‘columns’}, default ‘columns’

Whether to compare by the index (0 or ‘index’) or columns (1 or ‘columns’).

level : int or label

Broadcast across a level, matching Index values on the passed MultiIndex level.

Returns: DataFrame of bool

Result of the comparison.

DataFrame.eq : Compare DataFrames for equality elementwise.
DataFrame.ne : Compare DataFrames for inequality elementwise.
DataFrame.le : Compare DataFrames for less than inequality or equality elementwise.
DataFrame.lt : Compare DataFrames for strictly less than inequality elementwise.
DataFrame.ge : Compare DataFrames for greater than inequality or equality elementwise.
DataFrame.gt : Compare DataFrames for strictly greater than inequality elementwise.

Mismatched indices will be unioned together. NaN values are considered different (i.e. NaN != NaN).

>>> df = pd.DataFrame({'cost': [250, 150, 100],
...                    'revenue': [100, 250, 300]},
...                   index=['A', 'B', 'C'])
>>> df
   cost  revenue
A   250      100
B   150      250
C   100      300

Comparison with a scalar, using either the operator or method:

>>> df == 100
    cost  revenue
A  False     True
B  False    False
C   True    False
>>> df.eq(100)
    cost  revenue
A  False     True
B  False    False
C   True    False

When other is a Series, the columns of a DataFrame are aligned with the index of other and broadcast:

>>> df != pd.Series([100, 250], index=["cost", "revenue"])
    cost  revenue
A   True     True
B   True    False
C  False     True

Use the method to control the broadcast axis:

>>> df.ne(pd.Series([100, 300], index=["A", "D"]), axis='index')
   cost  revenue
A  True    False
B  True     True
C  True     True
D  True     True

When comparing to an arbitrary sequence, the number of columns must match the number of elements in other:

>>> df == [250, 100]
    cost  revenue
A   True     True
B  False    False
C  False    False

Use the method to control the axis:

>>> df.eq([250, 250, 100], axis='index')
    cost  revenue
A   True    False
B  False     True
C   True    False

Compare to a DataFrame of different shape.

>>> other = pd.DataFrame({'revenue': [300, 250, 100, 150]},
...                      index=['A', 'B', 'C', 'D'])
>>> other
   revenue
A      300
B      250
C      100
D      150
>>> df.gt(other)
    cost  revenue
A  False    False
B  False    False
C  False     True
D  False    False

Compare to a MultiIndex by level.

>>> df_multindex = pd.DataFrame({'cost': [250, 150, 100, 150, 300, 220],
...                              'revenue': [100, 250, 300, 200, 175, 225]},
...                             index=[['Q1', 'Q1', 'Q1', 'Q2', 'Q2', 'Q2'],
...                                    ['A', 'B', 'C', 'A', 'B', 'C']])
>>> df_multindex
      cost  revenue
Q1 A   250      100
   B   150      250
   C   100      300
Q2 A   150      200
   B   300      175
   C   220      225
>>> df.le(df_multindex, level=1)
       cost  revenue
Q1 A   True     True
   B   True     True
   C   True     True
Q2 A  False     True
   B   True    False
   C   True    False
mad(axis=None, skipna=None, level=None)

Return the mean absolute deviation of the values over the requested axis.

axis : {index (0), columns (1)}

Axis for the function to be applied on.

skipna : bool, default None

Exclude NA/null values when computing the result.

level : int or level name, default None

If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing into a Series.

Returns: Series or DataFrame (if level specified)
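No example accompanies the mad entry above; note also that DataFrame.mad was deprecated in pandas 1.5 and removed in 2.0. The statistic is easy to compute explicitly, which both sidesteps the removed method and shows exactly what it returns (the DataFrame below is hypothetical):

```python
import pandas as pd

# mad = mean of absolute deviations from the column mean, per column.
# Computed explicitly so this sketch also runs on pandas >= 2.0,
# where DataFrame.mad no longer exists.
df = pd.DataFrame({'x': [1.0, 2.0, 3.0, 4.0],
                   'y': [10.0, 10.0, 10.0, 10.0]})

m = (df - df.mean()).abs().mean()
print(m)  # x -> 1.0 (deviations 1.5, 0.5, 0.5, 1.5), y -> 0.0
```

On pandas versions that still provide it, `df.mad()` gives the same result.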

max(axis=None, skipna=None, level=None, numeric_only=None, **kwargs)

Return the maximum of the values over the requested axis.

If you want the index of the maximum, use idxmax. This is the equivalent of the numpy.ndarray method argmax.

axis : {index (0), columns (1)}

Axis for the function to be applied on.

skipna : bool, default True

Exclude NA/null values when computing the result.

level : int or level name, default None

If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing into a Series.

numeric_only : bool, default None

Include only float, int, boolean columns. If None, will attempt to use everything, then use only numeric data. Not implemented for Series.

**kwargs

Additional keyword arguments to be passed to the function.

Returns: Series or DataFrame (if level specified)

Series.sum : Return the sum. Series.min : Return the minimum. Series.max : Return the maximum. Series.idxmin : Return the index of the minimum. Series.idxmax : Return the index of the maximum. DataFrame.sum : Return the sum over the requested axis. DataFrame.min : Return the minimum over the requested axis. DataFrame.max : Return the maximum over the requested axis. DataFrame.idxmin : Return the index of the minimum over the requested axis. DataFrame.idxmax : Return the index of the maximum over the requested axis.

>>> idx = pd.MultiIndex.from_arrays([
...     ['warm', 'warm', 'cold', 'cold'],
...     ['dog', 'falcon', 'fish', 'spider']],
...     names=['blooded', 'animal'])
>>> s = pd.Series([4, 2, 0, 8], name='legs', index=idx)
>>> s
blooded  animal
warm     dog       4
         falcon    2
cold     fish      0
         spider    8
Name: legs, dtype: int64
>>> s.max()
8

Max using level names, as well as indices.

>>> s.max(level='blooded')
blooded
warm    4
cold    8
Name: legs, dtype: int64
>>> s.max(level=0)
blooded
warm    4
cold    8
Name: legs, dtype: int64
mean(axis=None, skipna=None, level=None, numeric_only=None, **kwargs)

Return the mean of the values over the requested axis.

axis : {index (0), columns (1)}

Axis for the function to be applied on.

skipna : bool, default True

Exclude NA/null values when computing the result.

level : int or level name, default None

If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing into a Series.

numeric_only : bool, default None

Include only float, int, boolean columns. If None, will attempt to use everything, then use only numeric data. Not implemented for Series.

**kwargs

Additional keyword arguments to be passed to the function.

Returns: Series or DataFrame (if level specified)
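The mean entry above has no example; a minimal sketch with a hypothetical frame showing the axis and skipna parameters in action:

```python
import pandas as pd
import numpy as np

# Hypothetical frame with one missing value.
df = pd.DataFrame({'a': [1.0, 2.0, np.nan],
                   'b': [4.0, 5.0, 6.0]})

col_means = df.mean()            # per column; NaN skipped by default
row_means = df.mean(axis=1)      # per row
strict = df.mean(skipna=False)   # NaN propagates into column 'a'
print(col_means)  # a -> 1.5, b -> 5.0
```

With skipna=False, any NaN in a column makes that column's mean NaN.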

median(axis=None, skipna=None, level=None, numeric_only=None, **kwargs)

Return the median of the values over the requested axis.

axis : {index (0), columns (1)}

Axis for the function to be applied on.

skipna : bool, default True

Exclude NA/null values when computing the result.

level : int or level name, default None

If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing into a Series.

numeric_only : bool, default None

Include only float, int, boolean columns. If None, will attempt to use everything, then use only numeric data. Not implemented for Series.

**kwargs

Additional keyword arguments to be passed to the function.

Returns: Series or DataFrame (if level specified)
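The median entry above has no example; a minimal sketch with hypothetical data, illustrating that the median is robust to the outlier that distorts the mean:

```python
import pandas as pd

# Hypothetical data: the outlier in 'a' shifts the mean but not the median.
df = pd.DataFrame({'a': [1.0, 2.0, 3.0, 100.0],
                   'b': [1.0, 2.0, 3.0, 4.0]})

med = df.median()   # a -> 2.5, b -> 2.5
avg = df.mean()     # a -> 26.5, b -> 2.5
print(med)
```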

min(axis=None, skipna=None, level=None, numeric_only=None, **kwargs)

Return the minimum of the values over the requested axis.

If you want the index of the minimum, use idxmin. This is the equivalent of the numpy.ndarray method argmin.

axis : {index (0), columns (1)}

Axis for the function to be applied on.

skipna : bool, default True

Exclude NA/null values when computing the result.

level : int or level name, default None

If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing into a Series.

numeric_only : bool, default None

Include only float, int, boolean columns. If None, will attempt to use everything, then use only numeric data. Not implemented for Series.

**kwargs

Additional keyword arguments to be passed to the function.

Returns: Series or DataFrame (if level specified)

Series.sum : Return the sum. Series.min : Return the minimum. Series.max : Return the maximum. Series.idxmin : Return the index of the minimum. Series.idxmax : Return the index of the maximum. DataFrame.sum : Return the sum over the requested axis. DataFrame.min : Return the minimum over the requested axis. DataFrame.max : Return the maximum over the requested axis. DataFrame.idxmin : Return the index of the minimum over the requested axis. DataFrame.idxmax : Return the index of the maximum over the requested axis.

>>> idx = pd.MultiIndex.from_arrays([
...     ['warm', 'warm', 'cold', 'cold'],
...     ['dog', 'falcon', 'fish', 'spider']],
...     names=['blooded', 'animal'])
>>> s = pd.Series([4, 2, 0, 8], name='legs', index=idx)
>>> s
blooded  animal
warm     dog       4
         falcon    2
cold     fish      0
         spider    8
Name: legs, dtype: int64
>>> s.min()
0

Min using level names, as well as indices.

>>> s.min(level='blooded')
blooded
warm    2
cold    0
Name: legs, dtype: int64
>>> s.min(level=0)
blooded
warm    2
cold    0
Name: legs, dtype: int64
mod(other, axis='columns', level=None, fill_value=None)

Get Modulo of dataframe and other, element-wise (binary operator mod).

Equivalent to dataframe % other, but with support to substitute a fill_value for missing data in one of the inputs. With reverse version, rmod.

Among flexible wrappers (add, sub, mul, div, mod, pow) to arithmetic operators: +, -, *, /, //, %, **.

other : scalar, sequence, Series, or DataFrame

Any single or multiple element data structure, or list-like object.

axis : {0 or ‘index’, 1 or ‘columns’}

Whether to compare by the index (0 or ‘index’) or columns (1 or ‘columns’). For Series input, axis to match Series index on.

level : int or label

Broadcast across a level, matching Index values on the passed MultiIndex level.

fill_value : float or None, default None

Fill existing missing (NaN) values, and any new element needed for successful DataFrame alignment, with this value before computation. If data in both corresponding DataFrame locations is missing the result will be missing.

Returns: DataFrame

Result of the arithmetic operation.

DataFrame.add : Add DataFrames. DataFrame.sub : Subtract DataFrames. DataFrame.mul : Multiply DataFrames. DataFrame.div : Divide DataFrames (float division). DataFrame.truediv : Divide DataFrames (float division). DataFrame.floordiv : Divide DataFrames (integer division). DataFrame.mod : Calculate modulo (remainder after division). DataFrame.pow : Calculate exponential power.

Mismatched indices will be unioned together.

>>> df = pd.DataFrame({'angles': [0, 3, 4],
...                    'degrees': [360, 180, 360]},
...                   index=['circle', 'triangle', 'rectangle'])
>>> df
           angles  degrees
circle          0      360
triangle        3      180
rectangle       4      360

Add a scalar with the operator version, which returns the same results.

>>> df + 1
           angles  degrees
circle          1      361
triangle        4      181
rectangle       5      361
>>> df.add(1)
           angles  degrees
circle          1      361
triangle        4      181
rectangle       5      361

Divide by constant with reverse version.

>>> df.div(10)
           angles  degrees
circle        0.0     36.0
triangle      0.3     18.0
rectangle     0.4     36.0
>>> df.rdiv(10)
             angles   degrees
circle          inf  0.027778
triangle   3.333333  0.055556
rectangle  2.500000  0.027778

Subtract a list and Series by axis with operator version.

>>> df - [1, 2]
           angles  degrees
circle         -1      358
triangle        2      178
rectangle       3      358
>>> df.sub([1, 2], axis='columns')
           angles  degrees
circle         -1      358
triangle        2      178
rectangle       3      358
>>> df.sub(pd.Series([1, 1, 1], index=['circle', 'triangle', 'rectangle']),
...        axis='index')
           angles  degrees
circle         -1      359
triangle        2      179
rectangle       3      359

Multiply a DataFrame of different shape with operator version.

>>> other = pd.DataFrame({'angles': [0, 3, 4]},
...                      index=['circle', 'triangle', 'rectangle'])
>>> other
           angles
circle          0
triangle        3
rectangle       4
>>> df * other
           angles  degrees
circle          0      NaN
triangle        9      NaN
rectangle      16      NaN
>>> df.mul(other, fill_value=0)
           angles  degrees
circle          0      0.0
triangle        9      0.0
rectangle      16      0.0

Divide by a MultiIndex by level.

>>> df_multindex = pd.DataFrame({'angles': [0, 3, 4, 4, 5, 6],
...                              'degrees': [360, 180, 360, 360, 540, 720]},
...                             index=[['A', 'A', 'A', 'B', 'B', 'B'],
...                                    ['circle', 'triangle', 'rectangle',
...                                     'square', 'pentagon', 'hexagon']])
>>> df_multindex
             angles  degrees
A circle          0      360
  triangle        3      180
  rectangle       4      360
B square          4      360
  pentagon        5      540
  hexagon         6      720
>>> df.div(df_multindex, level=1, fill_value=0)
             angles  degrees
A circle        NaN      1.0
  triangle      1.0      1.0
  rectangle     1.0      1.0
B square        0.0      0.0
  pentagon      0.0      0.0
  hexagon       0.0      0.0
mul(other, axis='columns', level=None, fill_value=None)

Get Multiplication of dataframe and other, element-wise (binary operator mul).

Equivalent to dataframe * other, but with support to substitute a fill_value for missing data in one of the inputs. With reverse version, rmul.

Among flexible wrappers (add, sub, mul, div, mod, pow) to arithmetic operators: +, -, *, /, //, %, **.

other : scalar, sequence, Series, or DataFrame

Any single or multiple element data structure, or list-like object.

axis : {0 or ‘index’, 1 or ‘columns’}

Whether to compare by the index (0 or ‘index’) or columns (1 or ‘columns’). For Series input, axis to match Series index on.

level : int or label

Broadcast across a level, matching Index values on the passed MultiIndex level.

fill_value : float or None, default None

Fill existing missing (NaN) values, and any new element needed for successful DataFrame alignment, with this value before computation. If data in both corresponding DataFrame locations is missing the result will be missing.

Returns: DataFrame

Result of the arithmetic operation.

DataFrame.add : Add DataFrames. DataFrame.sub : Subtract DataFrames. DataFrame.mul : Multiply DataFrames. DataFrame.div : Divide DataFrames (float division). DataFrame.truediv : Divide DataFrames (float division). DataFrame.floordiv : Divide DataFrames (integer division). DataFrame.mod : Calculate modulo (remainder after division). DataFrame.pow : Calculate exponential power.

Mismatched indices will be unioned together.

>>> df = pd.DataFrame({'angles': [0, 3, 4],
...                    'degrees': [360, 180, 360]},
...                   index=['circle', 'triangle', 'rectangle'])
>>> df
           angles  degrees
circle          0      360
triangle        3      180
rectangle       4      360

Add a scalar with the operator version, which returns the same results.

>>> df + 1
           angles  degrees
circle          1      361
triangle        4      181
rectangle       5      361
>>> df.add(1)
           angles  degrees
circle          1      361
triangle        4      181
rectangle       5      361

Divide by constant with reverse version.

>>> df.div(10)
           angles  degrees
circle        0.0     36.0
triangle      0.3     18.0
rectangle     0.4     36.0
>>> df.rdiv(10)
             angles   degrees
circle          inf  0.027778
triangle   3.333333  0.055556
rectangle  2.500000  0.027778

Subtract a list and Series by axis with operator version.

>>> df - [1, 2]
           angles  degrees
circle         -1      358
triangle        2      178
rectangle       3      358
>>> df.sub([1, 2], axis='columns')
           angles  degrees
circle         -1      358
triangle        2      178
rectangle       3      358
>>> df.sub(pd.Series([1, 1, 1], index=['circle', 'triangle', 'rectangle']),
...        axis='index')
           angles  degrees
circle         -1      359
triangle        2      179
rectangle       3      359

Multiply a DataFrame of different shape with operator version.

>>> other = pd.DataFrame({'angles': [0, 3, 4]},
...                      index=['circle', 'triangle', 'rectangle'])
>>> other
           angles
circle          0
triangle        3
rectangle       4
>>> df * other
           angles  degrees
circle          0      NaN
triangle        9      NaN
rectangle      16      NaN
>>> df.mul(other, fill_value=0)
           angles  degrees
circle          0      0.0
triangle        9      0.0
rectangle      16      0.0

Divide by a MultiIndex by level.

>>> df_multindex = pd.DataFrame({'angles': [0, 3, 4, 4, 5, 6],
...                              'degrees': [360, 180, 360, 360, 540, 720]},
...                             index=[['A', 'A', 'A', 'B', 'B', 'B'],
...                                    ['circle', 'triangle', 'rectangle',
...                                     'square', 'pentagon', 'hexagon']])
>>> df_multindex
             angles  degrees
A circle          0      360
  triangle        3      180
  rectangle       4      360
B square          4      360
  pentagon        5      540
  hexagon         6      720
>>> df.div(df_multindex, level=1, fill_value=0)
             angles  degrees
A circle        NaN      1.0
  triangle      1.0      1.0
  rectangle     1.0      1.0
B square        0.0      0.0
  pentagon      0.0      0.0
  hexagon       0.0      0.0
multiply(other, axis='columns', level=None, fill_value=None)

Get Multiplication of dataframe and other, element-wise (binary operator mul).

Equivalent to dataframe * other, but with support to substitute a fill_value for missing data in one of the inputs. With reverse version, rmul.

Among flexible wrappers (add, sub, mul, div, mod, pow) to arithmetic operators: +, -, *, /, //, %, **.

other : scalar, sequence, Series, or DataFrame

Any single or multiple element data structure, or list-like object.

axis : {0 or ‘index’, 1 or ‘columns’}

Whether to compare by the index (0 or ‘index’) or columns (1 or ‘columns’). For Series input, axis to match Series index on.

level : int or label

Broadcast across a level, matching Index values on the passed MultiIndex level.

fill_value : float or None, default None

Fill existing missing (NaN) values, and any new element needed for successful DataFrame alignment, with this value before computation. If data in both corresponding DataFrame locations is missing the result will be missing.

Returns: DataFrame

Result of the arithmetic operation.

DataFrame.add : Add DataFrames. DataFrame.sub : Subtract DataFrames. DataFrame.mul : Multiply DataFrames. DataFrame.div : Divide DataFrames (float division). DataFrame.truediv : Divide DataFrames (float division). DataFrame.floordiv : Divide DataFrames (integer division). DataFrame.mod : Calculate modulo (remainder after division). DataFrame.pow : Calculate exponential power.

Mismatched indices will be unioned together.

>>> df = pd.DataFrame({'angles': [0, 3, 4],
...                    'degrees': [360, 180, 360]},
...                   index=['circle', 'triangle', 'rectangle'])
>>> df
           angles  degrees
circle          0      360
triangle        3      180
rectangle       4      360

Add a scalar with the operator version, which returns the same results.

>>> df + 1
           angles  degrees
circle          1      361
triangle        4      181
rectangle       5      361
>>> df.add(1)
           angles  degrees
circle          1      361
triangle        4      181
rectangle       5      361

Divide by constant with reverse version.

>>> df.div(10)
           angles  degrees
circle        0.0     36.0
triangle      0.3     18.0
rectangle     0.4     36.0
>>> df.rdiv(10)
             angles   degrees
circle          inf  0.027778
triangle   3.333333  0.055556
rectangle  2.500000  0.027778

Subtract a list and Series by axis with operator version.

>>> df - [1, 2]
           angles  degrees
circle         -1      358
triangle        2      178
rectangle       3      358
>>> df.sub([1, 2], axis='columns')
           angles  degrees
circle         -1      358
triangle        2      178
rectangle       3      358
>>> df.sub(pd.Series([1, 1, 1], index=['circle', 'triangle', 'rectangle']),
...        axis='index')
           angles  degrees
circle         -1      359
triangle        2      179
rectangle       3      359

Multiply a DataFrame of different shape with operator version.

>>> other = pd.DataFrame({'angles': [0, 3, 4]},
...                      index=['circle', 'triangle', 'rectangle'])
>>> other
           angles
circle          0
triangle        3
rectangle       4
>>> df * other
           angles  degrees
circle          0      NaN
triangle        9      NaN
rectangle      16      NaN
>>> df.mul(other, fill_value=0)
           angles  degrees
circle          0      0.0
triangle        9      0.0
rectangle      16      0.0

Divide by a MultiIndex by level.

>>> df_multindex = pd.DataFrame({'angles': [0, 3, 4, 4, 5, 6],
...                              'degrees': [360, 180, 360, 360, 540, 720]},
...                             index=[['A', 'A', 'A', 'B', 'B', 'B'],
...                                    ['circle', 'triangle', 'rectangle',
...                                     'square', 'pentagon', 'hexagon']])
>>> df_multindex
             angles  degrees
A circle          0      360
  triangle        3      180
  rectangle       4      360
B square          4      360
  pentagon        5      540
  hexagon         6      720
>>> df.div(df_multindex, level=1, fill_value=0)
             angles  degrees
A circle        NaN      1.0
  triangle      1.0      1.0
  rectangle     1.0      1.0
B square        0.0      0.0
  pentagon      0.0      0.0
  hexagon       0.0      0.0
ne(other, axis='columns', level=None)

Get Not equal to of dataframe and other, element-wise (binary operator ne).

Among flexible wrappers (eq, ne, le, lt, ge, gt) to comparison operators.

Equivalent to ==, !=, <=, <, >=, > with support to choose axis (rows or columns) and level for comparison.

other : scalar, sequence, Series, or DataFrame

Any single or multiple element data structure, or list-like object.

axis : {0 or ‘index’, 1 or ‘columns’}, default ‘columns’

Whether to compare by the index (0 or ‘index’) or columns (1 or ‘columns’).

level : int or label

Broadcast across a level, matching Index values on the passed MultiIndex level.

Returns: DataFrame of bool

Result of the comparison.

DataFrame.eq : Compare DataFrames for equality elementwise.
DataFrame.ne : Compare DataFrames for inequality elementwise.
DataFrame.le : Compare DataFrames for less than inequality or equality elementwise.
DataFrame.lt : Compare DataFrames for strictly less than inequality elementwise.
DataFrame.ge : Compare DataFrames for greater than inequality or equality elementwise.
DataFrame.gt : Compare DataFrames for strictly greater than inequality elementwise.

Mismatched indices will be unioned together. NaN values are considered different (i.e. NaN != NaN).

>>> df = pd.DataFrame({'cost': [250, 150, 100],
...                    'revenue': [100, 250, 300]},
...                   index=['A', 'B', 'C'])
>>> df
   cost  revenue
A   250      100
B   150      250
C   100      300

Comparison with a scalar, using either the operator or method:

>>> df == 100
    cost  revenue
A  False     True
B  False    False
C   True    False
>>> df.eq(100)
    cost  revenue
A  False     True
B  False    False
C   True    False

When other is a Series, the columns of a DataFrame are aligned with the index of other and broadcast:

>>> df != pd.Series([100, 250], index=["cost", "revenue"])
    cost  revenue
A   True     True
B   True    False
C  False     True

Use the method to control the broadcast axis:

>>> df.ne(pd.Series([100, 300], index=["A", "D"]), axis='index')
   cost  revenue
A  True    False
B  True     True
C  True     True
D  True     True

When comparing to an arbitrary sequence, the number of columns must match the number of elements in other:

>>> df == [250, 100]
    cost  revenue
A   True     True
B  False    False
C  False    False

Use the method to control the axis:

>>> df.eq([250, 250, 100], axis='index')
    cost  revenue
A   True    False
B  False     True
C   True    False

Compare to a DataFrame of different shape.

>>> other = pd.DataFrame({'revenue': [300, 250, 100, 150]},
...                      index=['A', 'B', 'C', 'D'])
>>> other
   revenue
A      300
B      250
C      100
D      150
>>> df.gt(other)
    cost  revenue
A  False    False
B  False    False
C  False     True
D  False    False

Compare to a MultiIndex by level.

>>> df_multindex = pd.DataFrame({'cost': [250, 150, 100, 150, 300, 220],
...                              'revenue': [100, 250, 300, 200, 175, 225]},
...                             index=[['Q1', 'Q1', 'Q1', 'Q2', 'Q2', 'Q2'],
...                                    ['A', 'B', 'C', 'A', 'B', 'C']])
>>> df_multindex
      cost  revenue
Q1 A   250      100
   B   150      250
   C   100      300
Q2 A   150      200
   B   300      175
   C   220      225
>>> df.le(df_multindex, level=1)
       cost  revenue
Q1 A   True     True
   B   True     True
   C   True     True
Q2 A  False     True
   B   True    False
   C   True    False
pow(other, axis='columns', level=None, fill_value=None)

Get Exponential power of dataframe and other, element-wise (binary operator pow).

Equivalent to dataframe ** other, but with support to substitute a fill_value for missing data in one of the inputs. With reverse version, rpow.

Among flexible wrappers (add, sub, mul, div, mod, pow) to arithmetic operators: +, -, *, /, //, %, **.

other : scalar, sequence, Series, or DataFrame

Any single or multiple element data structure, or list-like object.

axis : {0 or ‘index’, 1 or ‘columns’}

Whether to compare by the index (0 or ‘index’) or columns (1 or ‘columns’). For Series input, axis to match Series index on.

level : int or label

Broadcast across a level, matching Index values on the passed MultiIndex level.

fill_value : float or None, default None

Fill existing missing (NaN) values, and any new element needed for successful DataFrame alignment, with this value before computation. If data in both corresponding DataFrame locations is missing the result will be missing.

Returns: DataFrame

Result of the arithmetic operation.

DataFrame.add : Add DataFrames. DataFrame.sub : Subtract DataFrames. DataFrame.mul : Multiply DataFrames. DataFrame.div : Divide DataFrames (float division). DataFrame.truediv : Divide DataFrames (float division). DataFrame.floordiv : Divide DataFrames (integer division). DataFrame.mod : Calculate modulo (remainder after division). DataFrame.pow : Calculate exponential power.

Mismatched indices will be unioned together.

>>> df = pd.DataFrame({'angles': [0, 3, 4],
...                    'degrees': [360, 180, 360]},
...                   index=['circle', 'triangle', 'rectangle'])
>>> df
           angles  degrees
circle          0      360
triangle        3      180
rectangle       4      360

Add a scalar with the operator version, which returns the same results.

>>> df + 1
           angles  degrees
circle          1      361
triangle        4      181
rectangle       5      361
>>> df.add(1)
           angles  degrees
circle          1      361
triangle        4      181
rectangle       5      361

Divide by constant with reverse version.

>>> df.div(10)
           angles  degrees
circle        0.0     36.0
triangle      0.3     18.0
rectangle     0.4     36.0
>>> df.rdiv(10)
             angles   degrees
circle          inf  0.027778
triangle   3.333333  0.055556
rectangle  2.500000  0.027778

Subtract a list and Series by axis with operator version.

>>> df - [1, 2]
           angles  degrees
circle         -1      358
triangle        2      178
rectangle       3      358
>>> df.sub([1, 2], axis='columns')
           angles  degrees
circle         -1      358
triangle        2      178
rectangle       3      358
>>> df.sub(pd.Series([1, 1, 1], index=['circle', 'triangle', 'rectangle']),
...        axis='index')
           angles  degrees
circle         -1      359
triangle        2      179
rectangle       3      359

Multiply a DataFrame of different shape with operator version.

>>> other = pd.DataFrame({'angles': [0, 3, 4]},
...                      index=['circle', 'triangle', 'rectangle'])
>>> other
           angles
circle          0
triangle        3
rectangle       4
>>> df * other
           angles  degrees
circle          0      NaN
triangle        9      NaN
rectangle      16      NaN
>>> df.mul(other, fill_value=0)
           angles  degrees
circle          0      0.0
triangle        9      0.0
rectangle      16      0.0

Divide by a MultiIndex by level.

>>> df_multindex = pd.DataFrame({'angles': [0, 3, 4, 4, 5, 6],
...                              'degrees': [360, 180, 360, 360, 540, 720]},
...                             index=[['A', 'A', 'A', 'B', 'B', 'B'],
...                                    ['circle', 'triangle', 'rectangle',
...                                     'square', 'pentagon', 'hexagon']])
>>> df_multindex
             angles  degrees
A circle          0      360
  triangle        3      180
  rectangle       4      360
B square          4      360
  pentagon        5      540
  hexagon         6      720
>>> df.div(df_multindex, level=1, fill_value=0)
             angles  degrees
A circle        NaN      1.0
  triangle      1.0      1.0
  rectangle     1.0      1.0
B square        0.0      0.0
  pentagon      0.0      0.0
  hexagon       0.0      0.0
prod(axis=None, skipna=None, level=None, numeric_only=None, min_count=0, **kwargs)

Return the product of the values over the requested axis.

axis : {index (0), columns (1)}

Axis for the function to be applied on.

skipna : bool, default True

Exclude NA/null values when computing the result.

level : int or level name, default None

If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing into a Series.

numeric_only : bool, default None

Include only float, int, boolean columns. If None, will attempt to use everything, then use only numeric data. Not implemented for Series.

min_count : int, default 0

The required number of valid values to perform the operation. If fewer than min_count non-NA values are present the result will be NA.

**kwargs

Additional keyword arguments to be passed to the function.

Returns: Series or DataFrame (if level specified)

See also:
Series.sum : Return the sum.
Series.min : Return the minimum.
Series.max : Return the maximum.
Series.idxmin : Return the index of the minimum.
Series.idxmax : Return the index of the maximum.
DataFrame.sum : Return the sum over the requested axis.
DataFrame.min : Return the minimum over the requested axis.
DataFrame.max : Return the maximum over the requested axis.
DataFrame.idxmin : Return the index of the minimum over the requested axis.
DataFrame.idxmax : Return the index of the maximum over the requested axis.

By default, the product of an empty or all-NA Series is 1

>>> pd.Series([]).prod()
1.0

This can be controlled with the min_count parameter

>>> pd.Series([]).prod(min_count=1)
nan

Thanks to the skipna parameter, min_count handles all-NA and empty series identically.

>>> pd.Series([np.nan]).prod()
1.0
>>> pd.Series([np.nan]).prod(min_count=1)
nan
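The docstring examples above only use Series; as an illustrative sketch (the DataFrame here is invented for the example, not taken from the docstring), min_count works the same way per row or column of a DataFrame:

```python
import numpy as np
import pandas as pd

# Hypothetical data: the second row has a missing value.
df = pd.DataFrame({'a': [2.0, 3.0], 'b': [4.0, np.nan]})

# Row products with the default skipna=True: NaN is ignored.
row_prod = df.prod(axis=1)          # [8.0, 3.0]

# Require at least 2 valid values per row: the second row becomes NaN.
strict = df.prod(axis=1, min_count=2)
```

With min_count=2 the first row still yields 8.0, while the NaN-bearing row no longer has enough valid values and the result is NA.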
product(axis=None, skipna=None, level=None, numeric_only=None, min_count=0, **kwargs)

Return the product of the values over the requested axis.

axis : {index (0), columns (1)}

Axis for the function to be applied on.

skipna : bool, default True

Exclude NA/null values when computing the result.

level : int or level name, default None

If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing into a Series.

numeric_only : bool, default None

Include only float, int, boolean columns. If None, will attempt to use everything, then use only numeric data. Not implemented for Series.

min_count : int, default 0

The required number of valid values to perform the operation. If fewer than min_count non-NA values are present the result will be NA.

**kwargs

Additional keyword arguments to be passed to the function.

Returns: Series or DataFrame (if level specified)

See also:
Series.sum : Return the sum.
Series.min : Return the minimum.
Series.max : Return the maximum.
Series.idxmin : Return the index of the minimum.
Series.idxmax : Return the index of the maximum.
DataFrame.sum : Return the sum over the requested axis.
DataFrame.min : Return the minimum over the requested axis.
DataFrame.max : Return the maximum over the requested axis.
DataFrame.idxmin : Return the index of the minimum over the requested axis.
DataFrame.idxmax : Return the index of the maximum over the requested axis.

By default, the product of an empty or all-NA Series is 1

>>> pd.Series([]).prod()
1.0

This can be controlled with the min_count parameter

>>> pd.Series([]).prod(min_count=1)
nan

Thanks to the skipna parameter, min_count handles all-NA and empty series identically.

>>> pd.Series([np.nan]).prod()
1.0
>>> pd.Series([np.nan]).prod(min_count=1)
nan
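product is simply an alias of prod, so the two are interchangeable; a minimal sketch (data invented for the example):

```python
import pandas as pd

s = pd.Series([1.5, 2.0, 4.0])

# product and prod compute the same thing: 1.5 * 2.0 * 4.0 = 12.0
assert s.product() == s.prod() == 12.0
```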
radd(other, axis='columns', level=None, fill_value=None)

Get Addition of dataframe and other, element-wise (binary operator radd).

Equivalent to other + dataframe, but with support to substitute a fill_value for missing data in one of the inputs. With reverse version, add.

Among flexible wrappers (add, sub, mul, div, mod, pow) to arithmetic operators: +, -, *, /, //, %, **.

other : scalar, sequence, Series, or DataFrame

Any single or multiple element data structure, or list-like object.

axis : {0 or ‘index’, 1 or ‘columns’}

Whether to compare by the index (0 or ‘index’) or columns (1 or ‘columns’). For Series input, axis to match Series index on.

level : int or label

Broadcast across a level, matching Index values on the passed MultiIndex level.

fill_value : float or None, default None

Fill existing missing (NaN) values, and any new element needed for successful DataFrame alignment, with this value before computation. If data in both corresponding DataFrame locations is missing the result will be missing.

Returns: DataFrame

Result of the arithmetic operation.

See also:
DataFrame.add : Add DataFrames.
DataFrame.sub : Subtract DataFrames.
DataFrame.mul : Multiply DataFrames.
DataFrame.div : Divide DataFrames (float division).
DataFrame.truediv : Divide DataFrames (float division).
DataFrame.floordiv : Divide DataFrames (integer division).
DataFrame.mod : Calculate modulo (remainder after division).
DataFrame.pow : Calculate exponential power.

Mismatched indices will be unioned together.

>>> df = pd.DataFrame({'angles': [0, 3, 4],
...                    'degrees': [360, 180, 360]},
...                   index=['circle', 'triangle', 'rectangle'])
>>> df
           angles  degrees
circle          0      360
triangle        3      180
rectangle       4      360

Add a scalar with the operator version, which returns the same results.

>>> df + 1
           angles  degrees
circle          1      361
triangle        4      181
rectangle       5      361
>>> df.add(1)
           angles  degrees
circle          1      361
triangle        4      181
rectangle       5      361

Divide by constant with reverse version.

>>> df.div(10)
           angles  degrees
circle        0.0     36.0
triangle      0.3     18.0
rectangle     0.4     36.0
>>> df.rdiv(10)
             angles   degrees
circle          inf  0.027778
triangle   3.333333  0.055556
rectangle  2.500000  0.027778

Subtract a list and a Series by axis with the operator version.

>>> df - [1, 2]
           angles  degrees
circle         -1      358
triangle        2      178
rectangle       3      358
>>> df.sub([1, 2], axis='columns')
           angles  degrees
circle         -1      358
triangle        2      178
rectangle       3      358
>>> df.sub(pd.Series([1, 1, 1], index=['circle', 'triangle', 'rectangle']),
...        axis='index')
           angles  degrees
circle         -1      359
triangle        2      179
rectangle       3      359

Multiply a DataFrame of different shape with operator version.

>>> other = pd.DataFrame({'angles': [0, 3, 4]},
...                      index=['circle', 'triangle', 'rectangle'])
>>> other
           angles
circle          0
triangle        3
rectangle       4
>>> df * other
           angles  degrees
circle          0      NaN
triangle        9      NaN
rectangle      16      NaN
>>> df.mul(other, fill_value=0)
           angles  degrees
circle          0      0.0
triangle        9      0.0
rectangle      16      0.0

Divide by a MultiIndex DataFrame, matching on a level.

>>> df_multindex = pd.DataFrame({'angles': [0, 3, 4, 4, 5, 6],
...                              'degrees': [360, 180, 360, 360, 540, 720]},
...                             index=[['A', 'A', 'A', 'B', 'B', 'B'],
...                                    ['circle', 'triangle', 'rectangle',
...                                     'square', 'pentagon', 'hexagon']])
>>> df_multindex
             angles  degrees
A circle          0      360
  triangle        3      180
  rectangle       4      360
B square          4      360
  pentagon        5      540
  hexagon         6      720
>>> df.div(df_multindex, level=1, fill_value=0)
             angles  degrees
A circle        NaN      1.0
  triangle      1.0      1.0
  rectangle     1.0      1.0
B square        0.0      0.0
  pentagon      0.0      0.0
  hexagon       0.0      0.0
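The shared examples above never call radd itself; a minimal sketch (data invented for the example) showing that the reverse method matches the operator form with the DataFrame on the right:

```python
import pandas as pd

df = pd.DataFrame({'angles': [0, 3, 4]},
                  index=['circle', 'triangle', 'rectangle'])

# radd computes other + df, so df.radd(1) is the method form of 1 + df.
assert df.radd(1).equals(1 + df)

# For addition the forward and reverse methods agree, since + commutes.
assert df.radd(1).equals(df.add(1))
```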
rdiv(other, axis='columns', level=None, fill_value=None)

Get Floating division of dataframe and other, element-wise (binary operator rtruediv).

Equivalent to other / dataframe, but with support to substitute a fill_value for missing data in one of the inputs. With reverse version, truediv.

Among flexible wrappers (add, sub, mul, div, mod, pow) to arithmetic operators: +, -, *, /, //, %, **.

other : scalar, sequence, Series, or DataFrame

Any single or multiple element data structure, or list-like object.

axis : {0 or ‘index’, 1 or ‘columns’}

Whether to compare by the index (0 or ‘index’) or columns (1 or ‘columns’). For Series input, axis to match Series index on.

level : int or label

Broadcast across a level, matching Index values on the passed MultiIndex level.

fill_value : float or None, default None

Fill existing missing (NaN) values, and any new element needed for successful DataFrame alignment, with this value before computation. If data in both corresponding DataFrame locations is missing the result will be missing.

Returns: DataFrame

Result of the arithmetic operation.

See also:
DataFrame.add : Add DataFrames.
DataFrame.sub : Subtract DataFrames.
DataFrame.mul : Multiply DataFrames.
DataFrame.div : Divide DataFrames (float division).
DataFrame.truediv : Divide DataFrames (float division).
DataFrame.floordiv : Divide DataFrames (integer division).
DataFrame.mod : Calculate modulo (remainder after division).
DataFrame.pow : Calculate exponential power.

Mismatched indices will be unioned together.

>>> df = pd.DataFrame({'angles': [0, 3, 4],
...                    'degrees': [360, 180, 360]},
...                   index=['circle', 'triangle', 'rectangle'])
>>> df
           angles  degrees
circle          0      360
triangle        3      180
rectangle       4      360

Add a scalar with the operator version, which returns the same results.

>>> df + 1
           angles  degrees
circle          1      361
triangle        4      181
rectangle       5      361
>>> df.add(1)
           angles  degrees
circle          1      361
triangle        4      181
rectangle       5      361

Divide by constant with reverse version.

>>> df.div(10)
           angles  degrees
circle        0.0     36.0
triangle      0.3     18.0
rectangle     0.4     36.0
>>> df.rdiv(10)
             angles   degrees
circle          inf  0.027778
triangle   3.333333  0.055556
rectangle  2.500000  0.027778

Subtract a list and a Series by axis with the operator version.

>>> df - [1, 2]
           angles  degrees
circle         -1      358
triangle        2      178
rectangle       3      358
>>> df.sub([1, 2], axis='columns')
           angles  degrees
circle         -1      358
triangle        2      178
rectangle       3      358
>>> df.sub(pd.Series([1, 1, 1], index=['circle', 'triangle', 'rectangle']),
...        axis='index')
           angles  degrees
circle         -1      359
triangle        2      179
rectangle       3      359

Multiply a DataFrame of different shape with operator version.

>>> other = pd.DataFrame({'angles': [0, 3, 4]},
...                      index=['circle', 'triangle', 'rectangle'])
>>> other
           angles
circle          0
triangle        3
rectangle       4
>>> df * other
           angles  degrees
circle          0      NaN
triangle        9      NaN
rectangle      16      NaN
>>> df.mul(other, fill_value=0)
           angles  degrees
circle          0      0.0
triangle        9      0.0
rectangle      16      0.0

Divide by a MultiIndex DataFrame, matching on a level.

>>> df_multindex = pd.DataFrame({'angles': [0, 3, 4, 4, 5, 6],
...                              'degrees': [360, 180, 360, 360, 540, 720]},
...                             index=[['A', 'A', 'A', 'B', 'B', 'B'],
...                                    ['circle', 'triangle', 'rectangle',
...                                     'square', 'pentagon', 'hexagon']])
>>> df_multindex
             angles  degrees
A circle          0      360
  triangle        3      180
  rectangle       4      360
B square          4      360
  pentagon        5      540
  hexagon         6      720
>>> df.div(df_multindex, level=1, fill_value=0)
             angles  degrees
A circle        NaN      1.0
  triangle      1.0      1.0
  rectangle     1.0      1.0
B square        0.0      0.0
  pentagon      0.0      0.0
  hexagon       0.0      0.0
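The df.rdiv(10) call in the shared examples above already shows the reverse division; a minimal sketch (data invented for the example, zeros avoided to keep the result finite) making the operator equivalence explicit:

```python
import pandas as pd

df = pd.DataFrame({'angles': [1.0, 3.0, 4.0]},
                  index=['circle', 'triangle', 'rectangle'])

# rdiv computes other / df, so df.rdiv(10) is the method form of 10 / df.
assert df.rdiv(10).equals(10 / df)

# By contrast, the forward method divides the DataFrame by other.
assert df.div(10).equals(df / 10)
```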
rfloordiv(other, axis='columns', level=None, fill_value=None)

Get Integer division of dataframe and other, element-wise (binary operator rfloordiv).

Equivalent to other // dataframe, but with support to substitute a fill_value for missing data in one of the inputs. With reverse version, floordiv.

Among flexible wrappers (add, sub, mul, div, mod, pow) to arithmetic operators: +, -, *, /, //, %, **.

other : scalar, sequence, Series, or DataFrame

Any single or multiple element data structure, or list-like object.

axis : {0 or ‘index’, 1 or ‘columns’}

Whether to compare by the index (0 or ‘index’) or columns (1 or ‘columns’). For Series input, axis to match Series index on.

level : int or label

Broadcast across a level, matching Index values on the passed MultiIndex level.

fill_value : float or None, default None

Fill existing missing (NaN) values, and any new element needed for successful DataFrame alignment, with this value before computation. If data in both corresponding DataFrame locations is missing the result will be missing.

Returns: DataFrame

Result of the arithmetic operation.

See also:
DataFrame.add : Add DataFrames.
DataFrame.sub : Subtract DataFrames.
DataFrame.mul : Multiply DataFrames.
DataFrame.div : Divide DataFrames (float division).
DataFrame.truediv : Divide DataFrames (float division).
DataFrame.floordiv : Divide DataFrames (integer division).
DataFrame.mod : Calculate modulo (remainder after division).
DataFrame.pow : Calculate exponential power.

Mismatched indices will be unioned together.

>>> df = pd.DataFrame({'angles': [0, 3, 4],
...                    'degrees': [360, 180, 360]},
...                   index=['circle', 'triangle', 'rectangle'])
>>> df
           angles  degrees
circle          0      360
triangle        3      180
rectangle       4      360

Add a scalar with the operator version, which returns the same results.

>>> df + 1
           angles  degrees
circle          1      361
triangle        4      181
rectangle       5      361
>>> df.add(1)
           angles  degrees
circle          1      361
triangle        4      181
rectangle       5      361

Divide by constant with reverse version.

>>> df.div(10)
           angles  degrees
circle        0.0     36.0
triangle      0.3     18.0
rectangle     0.4     36.0
>>> df.rdiv(10)
             angles   degrees
circle          inf  0.027778
triangle   3.333333  0.055556
rectangle  2.500000  0.027778

Subtract a list and a Series by axis with the operator version.

>>> df - [1, 2]
           angles  degrees
circle         -1      358
triangle        2      178
rectangle       3      358
>>> df.sub([1, 2], axis='columns')
           angles  degrees
circle         -1      358
triangle        2      178
rectangle       3      358
>>> df.sub(pd.Series([1, 1, 1], index=['circle', 'triangle', 'rectangle']),
...        axis='index')
           angles  degrees
circle         -1      359
triangle        2      179
rectangle       3      359

Multiply a DataFrame of different shape with operator version.

>>> other = pd.DataFrame({'angles': [0, 3, 4]},
...                      index=['circle', 'triangle', 'rectangle'])
>>> other
           angles
circle          0
triangle        3
rectangle       4
>>> df * other
           angles  degrees
circle          0      NaN
triangle        9      NaN
rectangle      16      NaN
>>> df.mul(other, fill_value=0)
           angles  degrees
circle          0      0.0
triangle        9      0.0
rectangle      16      0.0

Divide by a MultiIndex DataFrame, matching on a level.

>>> df_multindex = pd.DataFrame({'angles': [0, 3, 4, 4, 5, 6],
...                              'degrees': [360, 180, 360, 360, 540, 720]},
...                             index=[['A', 'A', 'A', 'B', 'B', 'B'],
...                                    ['circle', 'triangle', 'rectangle',
...                                     'square', 'pentagon', 'hexagon']])
>>> df_multindex
             angles  degrees
A circle          0      360
  triangle        3      180
  rectangle       4      360
B square          4      360
  pentagon        5      540
  hexagon         6      720
>>> df.div(df_multindex, level=1, fill_value=0)
             angles  degrees
A circle        NaN      1.0
  triangle      1.0      1.0
  rectangle     1.0      1.0
B square        0.0      0.0
  pentagon      0.0      0.0
  hexagon       0.0      0.0
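The shared examples above never call rfloordiv itself; a minimal sketch (data invented for the example) of the reverse integer division:

```python
import pandas as pd

df = pd.DataFrame({'degrees': [360, 180, 360]},
                  index=['circle', 'triangle', 'rectangle'])

# rfloordiv computes other // df, the method form of 720 // df.
assert df.rfloordiv(720).equals(720 // df)

# 720 // [360, 180, 360] -> [2, 4, 2]
assert df.rfloordiv(720)['degrees'].tolist() == [2, 4, 2]
```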
rmod(other, axis='columns', level=None, fill_value=None)

Get Modulo of dataframe and other, element-wise (binary operator rmod).

Equivalent to other % dataframe, but with support to substitute a fill_value for missing data in one of the inputs. With reverse version, mod.

Among flexible wrappers (add, sub, mul, div, mod, pow) to arithmetic operators: +, -, *, /, //, %, **.

other : scalar, sequence, Series, or DataFrame

Any single or multiple element data structure, or list-like object.

axis : {0 or ‘index’, 1 or ‘columns’}

Whether to compare by the index (0 or ‘index’) or columns (1 or ‘columns’). For Series input, axis to match Series index on.

level : int or label

Broadcast across a level, matching Index values on the passed MultiIndex level.

fill_value : float or None, default None

Fill existing missing (NaN) values, and any new element needed for successful DataFrame alignment, with this value before computation. If data in both corresponding DataFrame locations is missing the result will be missing.

Returns: DataFrame

Result of the arithmetic operation.

See also:
DataFrame.add : Add DataFrames.
DataFrame.sub : Subtract DataFrames.
DataFrame.mul : Multiply DataFrames.
DataFrame.div : Divide DataFrames (float division).
DataFrame.truediv : Divide DataFrames (float division).
DataFrame.floordiv : Divide DataFrames (integer division).
DataFrame.mod : Calculate modulo (remainder after division).
DataFrame.pow : Calculate exponential power.

Mismatched indices will be unioned together.

>>> df = pd.DataFrame({'angles': [0, 3, 4],
...                    'degrees': [360, 180, 360]},
...                   index=['circle', 'triangle', 'rectangle'])
>>> df
           angles  degrees
circle          0      360
triangle        3      180
rectangle       4      360

Add a scalar with the operator version, which returns the same results.

>>> df + 1
           angles  degrees
circle          1      361
triangle        4      181
rectangle       5      361
>>> df.add(1)
           angles  degrees
circle          1      361
triangle        4      181
rectangle       5      361

Divide by constant with reverse version.

>>> df.div(10)
           angles  degrees
circle        0.0     36.0
triangle      0.3     18.0
rectangle     0.4     36.0
>>> df.rdiv(10)
             angles   degrees
circle          inf  0.027778
triangle   3.333333  0.055556
rectangle  2.500000  0.027778

Subtract a list and a Series by axis with the operator version.

>>> df - [1, 2]
           angles  degrees
circle         -1      358
triangle        2      178
rectangle       3      358
>>> df.sub([1, 2], axis='columns')
           angles  degrees
circle         -1      358
triangle        2      178
rectangle       3      358
>>> df.sub(pd.Series([1, 1, 1], index=['circle', 'triangle', 'rectangle']),
...        axis='index')
           angles  degrees
circle         -1      359
triangle        2      179
rectangle       3      359

Multiply a DataFrame of different shape with operator version.

>>> other = pd.DataFrame({'angles': [0, 3, 4]},
...                      index=['circle', 'triangle', 'rectangle'])
>>> other
           angles
circle          0
triangle        3
rectangle       4
>>> df * other
           angles  degrees
circle          0      NaN
triangle        9      NaN
rectangle      16      NaN
>>> df.mul(other, fill_value=0)
           angles  degrees
circle          0      0.0
triangle        9      0.0
rectangle      16      0.0

Divide by a MultiIndex DataFrame, matching on a level.

>>> df_multindex = pd.DataFrame({'angles': [0, 3, 4, 4, 5, 6],
...                              'degrees': [360, 180, 360, 360, 540, 720]},
...                             index=[['A', 'A', 'A', 'B', 'B', 'B'],
...                                    ['circle', 'triangle', 'rectangle',
...                                     'square', 'pentagon', 'hexagon']])
>>> df_multindex
             angles  degrees
A circle          0      360
  triangle        3      180
  rectangle       4      360
B square          4      360
  pentagon        5      540
  hexagon         6      720
>>> df.div(df_multindex, level=1, fill_value=0)
             angles  degrees
A circle        NaN      1.0
  triangle      1.0      1.0
  rectangle     1.0      1.0
B square        0.0      0.0
  pentagon      0.0      0.0
  hexagon       0.0      0.0
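The shared examples above never call rmod itself; a minimal sketch (data invented for the example, nonzero divisors chosen to keep the modulo well defined):

```python
import pandas as pd

df = pd.DataFrame({'degrees': [360, 180, 360]},
                  index=['circle', 'triangle', 'rectangle'])

# rmod computes other % df, the method form of 500 % df.
assert df.rmod(500).equals(500 % df)

# 500 % [360, 180, 360] -> [140, 140, 140]
assert df.rmod(500)['degrees'].tolist() == [140, 140, 140]
```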
rmul(other, axis='columns', level=None, fill_value=None)

Get Multiplication of dataframe and other, element-wise (binary operator rmul).

Equivalent to other * dataframe, but with support to substitute a fill_value for missing data in one of the inputs. With reverse version, mul.

Among flexible wrappers (add, sub, mul, div, mod, pow) to arithmetic operators: +, -, *, /, //, %, **.

other : scalar, sequence, Series, or DataFrame

Any single or multiple element data structure, or list-like object.

axis : {0 or ‘index’, 1 or ‘columns’}

Whether to compare by the index (0 or ‘index’) or columns (1 or ‘columns’). For Series input, axis to match Series index on.

level : int or label

Broadcast across a level, matching Index values on the passed MultiIndex level.

fill_value : float or None, default None

Fill existing missing (NaN) values, and any new element needed for successful DataFrame alignment, with this value before computation. If data in both corresponding DataFrame locations is missing the result will be missing.

Returns: DataFrame

Result of the arithmetic operation.

See also:
DataFrame.add : Add DataFrames.
DataFrame.sub : Subtract DataFrames.
DataFrame.mul : Multiply DataFrames.
DataFrame.div : Divide DataFrames (float division).
DataFrame.truediv : Divide DataFrames (float division).
DataFrame.floordiv : Divide DataFrames (integer division).
DataFrame.mod : Calculate modulo (remainder after division).
DataFrame.pow : Calculate exponential power.

Mismatched indices will be unioned together.

>>> df = pd.DataFrame({'angles': [0, 3, 4],
...                    'degrees': [360, 180, 360]},
...                   index=['circle', 'triangle', 'rectangle'])
>>> df
           angles  degrees
circle          0      360
triangle        3      180
rectangle       4      360

Add a scalar with the operator version, which returns the same results.

>>> df + 1
           angles  degrees
circle          1      361
triangle        4      181
rectangle       5      361
>>> df.add(1)
           angles  degrees
circle          1      361
triangle        4      181
rectangle       5      361

Divide by constant with reverse version.

>>> df.div(10)
           angles  degrees
circle        0.0     36.0
triangle      0.3     18.0
rectangle     0.4     36.0
>>> df.rdiv(10)
             angles   degrees
circle          inf  0.027778
triangle   3.333333  0.055556
rectangle  2.500000  0.027778

Subtract a list and a Series by axis with the operator version.

>>> df - [1, 2]
           angles  degrees
circle         -1      358
triangle        2      178
rectangle       3      358
>>> df.sub([1, 2], axis='columns')
           angles  degrees
circle         -1      358
triangle        2      178
rectangle       3      358
>>> df.sub(pd.Series([1, 1, 1], index=['circle', 'triangle', 'rectangle']),
...        axis='index')
           angles  degrees
circle         -1      359
triangle        2      179
rectangle       3      359

Multiply a DataFrame of different shape with operator version.

>>> other = pd.DataFrame({'angles': [0, 3, 4]},
...                      index=['circle', 'triangle', 'rectangle'])
>>> other
           angles
circle          0
triangle        3
rectangle       4
>>> df * other
           angles  degrees
circle          0      NaN
triangle        9      NaN
rectangle      16      NaN
>>> df.mul(other, fill_value=0)
           angles  degrees
circle          0      0.0
triangle        9      0.0
rectangle      16      0.0

Divide by a MultiIndex DataFrame, matching on a level.

>>> df_multindex = pd.DataFrame({'angles': [0, 3, 4, 4, 5, 6],
...                              'degrees': [360, 180, 360, 360, 540, 720]},
...                             index=[['A', 'A', 'A', 'B', 'B', 'B'],
...                                    ['circle', 'triangle', 'rectangle',
...                                     'square', 'pentagon', 'hexagon']])
>>> df_multindex
             angles  degrees
A circle          0      360
  triangle        3      180
  rectangle       4      360
B square          4      360
  pentagon        5      540
  hexagon         6      720
>>> df.div(df_multindex, level=1, fill_value=0)
             angles  degrees
A circle        NaN      1.0
  triangle      1.0      1.0
  rectangle     1.0      1.0
B square        0.0      0.0
  pentagon      0.0      0.0
  hexagon       0.0      0.0
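The shared examples above never call rmul itself; a minimal sketch (data invented for the example) showing that, since multiplication commutes, rmul and mul agree for a scalar:

```python
import pandas as pd

df = pd.DataFrame({'angles': [0, 3, 4]},
                  index=['circle', 'triangle', 'rectangle'])

# rmul computes other * df, the method form of 2 * df.
assert df.rmul(2).equals(2 * df)

# Multiplication commutes, so the forward method gives the same result.
assert df.rmul(2).equals(df.mul(2))
```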
rpow(other, axis='columns', level=None, fill_value=None)

Get Exponential power of dataframe and other, element-wise (binary operator rpow).

Equivalent to other ** dataframe, but with support to substitute a fill_value for missing data in one of the inputs. With reverse version, pow.

Among flexible wrappers (add, sub, mul, div, mod, pow) to arithmetic operators: +, -, *, /, //, %, **.

other : scalar, sequence, Series, or DataFrame

Any single or multiple element data structure, or list-like object.

axis : {0 or ‘index’, 1 or ‘columns’}

Whether to compare by the index (0 or ‘index’) or columns (1 or ‘columns’). For Series input, axis to match Series index on.

level : int or label

Broadcast across a level, matching Index values on the passed MultiIndex level.

fill_value : float or None, default None

Fill existing missing (NaN) values, and any new element needed for successful DataFrame alignment, with this value before computation. If data in both corresponding DataFrame locations is missing the result will be missing.

Returns: DataFrame

Result of the arithmetic operation.

See also:
DataFrame.add : Add DataFrames.
DataFrame.sub : Subtract DataFrames.
DataFrame.mul : Multiply DataFrames.
DataFrame.div : Divide DataFrames (float division).
DataFrame.truediv : Divide DataFrames (float division).
DataFrame.floordiv : Divide DataFrames (integer division).
DataFrame.mod : Calculate modulo (remainder after division).
DataFrame.pow : Calculate exponential power.

Mismatched indices will be unioned together.

>>> df = pd.DataFrame({'angles': [0, 3, 4],
...                    'degrees': [360, 180, 360]},
...                   index=['circle', 'triangle', 'rectangle'])
>>> df
           angles  degrees
circle          0      360
triangle        3      180
rectangle       4      360

Add a scalar with the operator version, which returns the same results.

>>> df + 1
           angles  degrees
circle          1      361
triangle        4      181
rectangle       5      361
>>> df.add(1)
           angles  degrees
circle          1      361
triangle        4      181
rectangle       5      361

Divide by constant with reverse version.

>>> df.div(10)
           angles  degrees
circle        0.0     36.0
triangle      0.3     18.0
rectangle     0.4     36.0
>>> df.rdiv(10)
             angles   degrees
circle          inf  0.027778
triangle   3.333333  0.055556
rectangle  2.500000  0.027778

Subtract a list and a Series by axis with the operator version.

>>> df - [1, 2]
           angles  degrees
circle         -1      358
triangle        2      178
rectangle       3      358
>>> df.sub([1, 2], axis='columns')
           angles  degrees
circle         -1      358
triangle        2      178
rectangle       3      358
>>> df.sub(pd.Series([1, 1, 1], index=['circle', 'triangle', 'rectangle']),
...        axis='index')
           angles  degrees
circle         -1      359
triangle        2      179
rectangle       3      359

Multiply a DataFrame of different shape with operator version.

>>> other = pd.DataFrame({'angles': [0, 3, 4]},
...                      index=['circle', 'triangle', 'rectangle'])
>>> other
           angles
circle          0
triangle        3
rectangle       4
>>> df * other
           angles  degrees
circle          0      NaN
triangle        9      NaN
rectangle      16      NaN
>>> df.mul(other, fill_value=0)
           angles  degrees
circle          0      0.0
triangle        9      0.0
rectangle      16      0.0

Divide by a MultiIndex DataFrame, matching on a level.

>>> df_multindex = pd.DataFrame({'angles': [0, 3, 4, 4, 5, 6],
...                              'degrees': [360, 180, 360, 360, 540, 720]},
...                             index=[['A', 'A', 'A', 'B', 'B', 'B'],
...                                    ['circle', 'triangle', 'rectangle',
...                                     'square', 'pentagon', 'hexagon']])
>>> df_multindex
             angles  degrees
A circle          0      360
  triangle        3      180
  rectangle       4      360
B square          4      360
  pentagon        5      540
  hexagon         6      720
>>> df.div(df_multindex, level=1, fill_value=0)
             angles  degrees
A circle        NaN      1.0
  triangle      1.0      1.0
  rectangle     1.0      1.0
B square        0.0      0.0
  pentagon      0.0      0.0
  hexagon       0.0      0.0
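The shared examples above never call rpow itself; a minimal sketch (data invented for the example) of the reverse exponentiation, with the DataFrame supplying the exponents:

```python
import pandas as pd

df = pd.DataFrame({'angles': [0, 3, 4]},
                  index=['circle', 'triangle', 'rectangle'])

# rpow computes other ** df, the method form of 2 ** df.
assert df.rpow(2).equals(2 ** df)

# 2 ** [0, 3, 4] -> [1, 8, 16]
assert df.rpow(2)['angles'].tolist() == [1, 8, 16]
```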
rsub(other, axis='columns', level=None, fill_value=None)

Get Subtraction of dataframe and other, element-wise (binary operator rsub).

Equivalent to other - dataframe, but with support to substitute a fill_value for missing data in one of the inputs. With reverse version, sub.

Among flexible wrappers (add, sub, mul, div, mod, pow) to arithmetic operators: +, -, *, /, //, %, **.

other : scalar, sequence, Series, or DataFrame

Any single or multiple element data structure, or list-like object.

axis : {0 or ‘index’, 1 or ‘columns’}

Whether to compare by the index (0 or ‘index’) or columns (1 or ‘columns’). For Series input, axis to match Series index on.

level : int or label

Broadcast across a level, matching Index values on the passed MultiIndex level.

fill_value : float or None, default None

Fill existing missing (NaN) values, and any new element needed for successful DataFrame alignment, with this value before computation. If data in both corresponding DataFrame locations is missing the result will be missing.

Returns: DataFrame

Result of the arithmetic operation.

See also:
DataFrame.add : Add DataFrames.
DataFrame.sub : Subtract DataFrames.
DataFrame.mul : Multiply DataFrames.
DataFrame.div : Divide DataFrames (float division).
DataFrame.truediv : Divide DataFrames (float division).
DataFrame.floordiv : Divide DataFrames (integer division).
DataFrame.mod : Calculate modulo (remainder after division).
DataFrame.pow : Calculate exponential power.

Mismatched indices will be unioned together.

>>> df = pd.DataFrame({'angles': [0, 3, 4],
...                    'degrees': [360, 180, 360]},
...                   index=['circle', 'triangle', 'rectangle'])
>>> df
           angles  degrees
circle          0      360
triangle        3      180
rectangle       4      360

Add a scalar with the operator version, which returns the same results.

>>> df + 1
           angles  degrees
circle          1      361
triangle        4      181
rectangle       5      361
>>> df.add(1)
           angles  degrees
circle          1      361
triangle        4      181
rectangle       5      361

Divide by constant with reverse version.

>>> df.div(10)
           angles  degrees
circle        0.0     36.0
triangle      0.3     18.0
rectangle     0.4     36.0
>>> df.rdiv(10)
             angles   degrees
circle          inf  0.027778
triangle   3.333333  0.055556
rectangle  2.500000  0.027778

Subtract a list and Series by axis with operator version.

>>> df - [1, 2]
           angles  degrees
circle         -1      358
triangle        2      178
rectangle       3      358
>>> df.sub([1, 2], axis='columns')
           angles  degrees
circle         -1      358
triangle        2      178
rectangle       3      358
>>> df.sub(pd.Series([1, 1, 1], index=['circle', 'triangle', 'rectangle']),
...        axis='index')
           angles  degrees
circle         -1      359
triangle        2      179
rectangle       3      359

Multiply a DataFrame of different shape with operator version.

>>> other = pd.DataFrame({'angles': [0, 3, 4]},
...                      index=['circle', 'triangle', 'rectangle'])
>>> other
           angles
circle          0
triangle        3
rectangle       4
>>> df * other
           angles  degrees
circle          0      NaN
triangle        9      NaN
rectangle      16      NaN
>>> df.mul(other, fill_value=0)
           angles  degrees
circle          0      0.0
triangle        9      0.0
rectangle      16      0.0

Divide by a MultiIndex by level.

>>> df_multindex = pd.DataFrame({'angles': [0, 3, 4, 4, 5, 6],
...                              'degrees': [360, 180, 360, 360, 540, 720]},
...                             index=[['A', 'A', 'A', 'B', 'B', 'B'],
...                                    ['circle', 'triangle', 'rectangle',
...                                     'square', 'pentagon', 'hexagon']])
>>> df_multindex
             angles  degrees
A circle          0      360
  triangle        3      180
  rectangle       4      360
B square          4      360
  pentagon        5      540
  hexagon         6      720
>>> df.div(df_multindex, level=1, fill_value=0)
             angles  degrees
A circle        NaN      1.0
  triangle      1.0      1.0
  rectangle     1.0      1.0
B square        0.0      0.0
  pentagon      0.0      0.0
  hexagon       0.0      0.0
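The shared examples above never call rsub itself; a minimal sketch of the reversed operator (illustrative values, not from the source), reusing the same df as above:

```python
import pandas as pd

df = pd.DataFrame({'angles': [0, 3, 4],
                   'degrees': [360, 180, 360]},
                  index=['circle', 'triangle', 'rectangle'])

# df.rsub(360) computes 360 - df element-wise (the reverse of df - 360)
result = df.rsub(360)
# angles column becomes 360, 357, 356; degrees column becomes 0, 180, 0
```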
rtruediv(other, axis='columns', level=None, fill_value=None)

Get Floating division of dataframe and other, element-wise (binary operator rtruediv).

Equivalent to other / dataframe, but with support to substitute a fill_value for missing data in one of the inputs. With reverse version, truediv.

Among flexible wrappers (add, sub, mul, div, mod, pow) to arithmetic operators: +, -, *, /, //, %, **.

other : scalar, sequence, Series, or DataFrame

Any single or multiple element data structure, or list-like object.

axis : {0 or ‘index’, 1 or ‘columns’}

Whether to compare by the index (0 or ‘index’) or columns (1 or ‘columns’). For Series input, axis to match Series index on.

level : int or label

Broadcast across a level, matching Index values on the passed MultiIndex level.

fill_value : float or None, default None

Fill existing missing (NaN) values, and any new element needed for successful DataFrame alignment, with this value before computation. If data in both corresponding DataFrame locations is missing the result will be missing.

DataFrame

Result of the arithmetic operation.

DataFrame.add : Add DataFrames. DataFrame.sub : Subtract DataFrames. DataFrame.mul : Multiply DataFrames. DataFrame.div : Divide DataFrames (float division). DataFrame.truediv : Divide DataFrames (float division). DataFrame.floordiv : Divide DataFrames (integer division). DataFrame.mod : Calculate modulo (remainder after division). DataFrame.pow : Calculate exponential power.

Mismatched indices will be unioned together.

>>> df = pd.DataFrame({'angles': [0, 3, 4],
...                    'degrees': [360, 180, 360]},
...                   index=['circle', 'triangle', 'rectangle'])
>>> df
           angles  degrees
circle          0      360
triangle        3      180
rectangle       4      360

Add a scalar with operator version which returns the same results.

>>> df + 1
           angles  degrees
circle          1      361
triangle        4      181
rectangle       5      361
>>> df.add(1)
           angles  degrees
circle          1      361
triangle        4      181
rectangle       5      361

Divide by constant with reverse version.

>>> df.div(10)
           angles  degrees
circle        0.0     36.0
triangle      0.3     18.0
rectangle     0.4     36.0
>>> df.rdiv(10)
             angles   degrees
circle          inf  0.027778
triangle   3.333333  0.055556
rectangle  2.500000  0.027778

Subtract a list and Series by axis with operator version.

>>> df - [1, 2]
           angles  degrees
circle         -1      358
triangle        2      178
rectangle       3      358
>>> df.sub([1, 2], axis='columns')
           angles  degrees
circle         -1      358
triangle        2      178
rectangle       3      358
>>> df.sub(pd.Series([1, 1, 1], index=['circle', 'triangle', 'rectangle']),
...        axis='index')
           angles  degrees
circle         -1      359
triangle        2      179
rectangle       3      359

Multiply a DataFrame of different shape with operator version.

>>> other = pd.DataFrame({'angles': [0, 3, 4]},
...                      index=['circle', 'triangle', 'rectangle'])
>>> other
           angles
circle          0
triangle        3
rectangle       4
>>> df * other
           angles  degrees
circle          0      NaN
triangle        9      NaN
rectangle      16      NaN
>>> df.mul(other, fill_value=0)
           angles  degrees
circle          0      0.0
triangle        9      0.0
rectangle      16      0.0

Divide by a MultiIndex by level.

>>> df_multindex = pd.DataFrame({'angles': [0, 3, 4, 4, 5, 6],
...                              'degrees': [360, 180, 360, 360, 540, 720]},
...                             index=[['A', 'A', 'A', 'B', 'B', 'B'],
...                                    ['circle', 'triangle', 'rectangle',
...                                     'square', 'pentagon', 'hexagon']])
>>> df_multindex
             angles  degrees
A circle          0      360
  triangle        3      180
  rectangle       4      360
B square          4      360
  pentagon        5      540
  hexagon         6      720
>>> df.div(df_multindex, level=1, fill_value=0)
             angles  degrees
A circle        NaN      1.0
  triangle      1.0      1.0
  rectangle     1.0      1.0
B square        0.0      0.0
  pentagon      0.0      0.0
  hexagon       0.0      0.0
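The shared examples above show df.rdiv but not rtruediv by name; a minimal sketch (illustrative values, not from the source) of the reversed floating division, which is equivalent to rdiv:

```python
import pandas as pd

df = pd.DataFrame({'angles': [0, 3, 4],
                   'degrees': [360, 180, 360]},
                  index=['circle', 'triangle', 'rectangle'])

# df.rtruediv(360) computes 360 / df element-wise
result = df.rtruediv(360)
# degrees column becomes 1.0, 2.0, 1.0; angles for 'circle' is inf (360 / 0)
```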
sem(axis=None, skipna=None, level=None, ddof=1, numeric_only=None, **kwargs)

Return unbiased standard error of the mean over requested axis.

Normalized by N-1 by default. This can be changed using the ddof argument.

axis : {index (0), columns (1)}

skipna : bool, default True

Exclude NA/null values. If an entire row/column is NA, the result will be NA.

level : int or level name, default None

If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing into a Series.

ddof : int, default 1

Delta Degrees of Freedom. The divisor used in calculations is N - ddof, where N represents the number of elements.

numeric_only : bool, default None

Include only float, int, boolean columns. If None, will attempt to use everything, then use only numeric data. Not implemented for Series.

Series or DataFrame (if level specified)

To have the same behaviour as numpy.std, use ddof=0 (instead of the default ddof=1)
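This docstring carries no usage example; a minimal sketch (illustrative values, not from the source) showing that sem is the ddof=1 standard deviation divided by the square root of N:

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, 2.0, 3.0, 4.0])

# standard error of the mean: sample std (ddof=1) / sqrt(N)
sem = s.sem()
manual = s.std(ddof=1) / np.sqrt(len(s))
# both are ~0.6455 for this series
```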

skew(axis=None, skipna=None, level=None, numeric_only=None, **kwargs)

Return unbiased skew over requested axis.

Normalized by N-1.

axis : {index (0), columns (1)}

Axis for the function to be applied on.

skipna : bool, default True

Exclude NA/null values when computing the result.

level : int or level name, default None

If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing into a Series.

numeric_only : bool, default None

Include only float, int, boolean columns. If None, will attempt to use everything, then use only numeric data. Not implemented for Series.

**kwargs

Additional keyword arguments to be passed to the function.

Series or DataFrame (if level specified)
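This docstring carries no usage example; a minimal sketch (illustrative values, not from the source) of how sample skewness behaves:

```python
import pandas as pd

# a symmetric sample has zero sample skewness; a long right tail gives positive skew
sym = pd.Series([1.0, 2.0, 3.0, 4.0, 5.0])
tail = pd.Series([1.0, 1.0, 1.0, 10.0])

sym_skew = sym.skew()    # ~0.0 for symmetric data
tail_skew = tail.skew()  # positive for a right-skewed sample
```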

sparse

alias of pandas.core.arrays.sparse.accessor.SparseFrameAccessor

std(axis=None, skipna=None, level=None, ddof=1, numeric_only=None, **kwargs)

Return sample standard deviation over requested axis.

Normalized by N-1 by default. This can be changed using the ddof argument.

axis : {index (0), columns (1)}

skipna : bool, default True

Exclude NA/null values. If an entire row/column is NA, the result will be NA.

level : int or level name, default None

If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing into a Series.

ddof : int, default 1

Delta Degrees of Freedom. The divisor used in calculations is N - ddof, where N represents the number of elements.

numeric_only : bool, default None

Include only float, int, boolean columns. If None, will attempt to use everything, then use only numeric data. Not implemented for Series.

Series or DataFrame (if level specified)

To have the same behaviour as numpy.std, use ddof=0 (instead of the default ddof=1)
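A minimal sketch (illustrative values, not from the source) of the ddof note above, contrasting the pandas default with numpy.std:

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, 2.0, 3.0, 4.0])

sample_std = s.std()            # ddof=1, pandas default (divides by N - 1)
population_std = s.std(ddof=0)  # matches numpy.std's default ddof=0 (divides by N)
```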

sub(other, axis='columns', level=None, fill_value=None)

Get Subtraction of dataframe and other, element-wise (binary operator sub).

Equivalent to dataframe - other, but with support to substitute a fill_value for missing data in one of the inputs. With reverse version, rsub.

Among flexible wrappers (add, sub, mul, div, mod, pow) to arithmetic operators: +, -, *, /, //, %, **.

other : scalar, sequence, Series, or DataFrame

Any single or multiple element data structure, or list-like object.

axis : {0 or ‘index’, 1 or ‘columns’}

Whether to compare by the index (0 or ‘index’) or columns (1 or ‘columns’). For Series input, axis to match Series index on.

level : int or label

Broadcast across a level, matching Index values on the passed MultiIndex level.

fill_value : float or None, default None

Fill existing missing (NaN) values, and any new element needed for successful DataFrame alignment, with this value before computation. If data in both corresponding DataFrame locations is missing the result will be missing.

DataFrame

Result of the arithmetic operation.

DataFrame.add : Add DataFrames. DataFrame.sub : Subtract DataFrames. DataFrame.mul : Multiply DataFrames. DataFrame.div : Divide DataFrames (float division). DataFrame.truediv : Divide DataFrames (float division). DataFrame.floordiv : Divide DataFrames (integer division). DataFrame.mod : Calculate modulo (remainder after division). DataFrame.pow : Calculate exponential power.

Mismatched indices will be unioned together.

>>> df = pd.DataFrame({'angles': [0, 3, 4],
...                    'degrees': [360, 180, 360]},
...                   index=['circle', 'triangle', 'rectangle'])
>>> df
           angles  degrees
circle          0      360
triangle        3      180
rectangle       4      360

Add a scalar with operator version which returns the same results.

>>> df + 1
           angles  degrees
circle          1      361
triangle        4      181
rectangle       5      361
>>> df.add(1)
           angles  degrees
circle          1      361
triangle        4      181
rectangle       5      361

Divide by constant with reverse version.

>>> df.div(10)
           angles  degrees
circle        0.0     36.0
triangle      0.3     18.0
rectangle     0.4     36.0
>>> df.rdiv(10)
             angles   degrees
circle          inf  0.027778
triangle   3.333333  0.055556
rectangle  2.500000  0.027778

Subtract a list and Series by axis with operator version.

>>> df - [1, 2]
           angles  degrees
circle         -1      358
triangle        2      178
rectangle       3      358
>>> df.sub([1, 2], axis='columns')
           angles  degrees
circle         -1      358
triangle        2      178
rectangle       3      358
>>> df.sub(pd.Series([1, 1, 1], index=['circle', 'triangle', 'rectangle']),
...        axis='index')
           angles  degrees
circle         -1      359
triangle        2      179
rectangle       3      359

Multiply a DataFrame of different shape with operator version.

>>> other = pd.DataFrame({'angles': [0, 3, 4]},
...                      index=['circle', 'triangle', 'rectangle'])
>>> other
           angles
circle          0
triangle        3
rectangle       4
>>> df * other
           angles  degrees
circle          0      NaN
triangle        9      NaN
rectangle      16      NaN
>>> df.mul(other, fill_value=0)
           angles  degrees
circle          0      0.0
triangle        9      0.0
rectangle      16      0.0

Divide by a MultiIndex by level.

>>> df_multindex = pd.DataFrame({'angles': [0, 3, 4, 4, 5, 6],
...                              'degrees': [360, 180, 360, 360, 540, 720]},
...                             index=[['A', 'A', 'A', 'B', 'B', 'B'],
...                                    ['circle', 'triangle', 'rectangle',
...                                     'square', 'pentagon', 'hexagon']])
>>> df_multindex
             angles  degrees
A circle          0      360
  triangle        3      180
  rectangle       4      360
B square          4      360
  pentagon        5      540
  hexagon         6      720
>>> df.div(df_multindex, level=1, fill_value=0)
             angles  degrees
A circle        NaN      1.0
  triangle      1.0      1.0
  rectangle     1.0      1.0
B square        0.0      0.0
  pentagon      0.0      0.0
  hexagon       0.0      0.0
subtract(other, axis='columns', level=None, fill_value=None)

Get Subtraction of dataframe and other, element-wise (binary operator sub).

Equivalent to dataframe - other, but with support to substitute a fill_value for missing data in one of the inputs. With reverse version, rsub.

Among flexible wrappers (add, sub, mul, div, mod, pow) to arithmetic operators: +, -, *, /, //, %, **.

other : scalar, sequence, Series, or DataFrame

Any single or multiple element data structure, or list-like object.

axis : {0 or ‘index’, 1 or ‘columns’}

Whether to compare by the index (0 or ‘index’) or columns (1 or ‘columns’). For Series input, axis to match Series index on.

level : int or label

Broadcast across a level, matching Index values on the passed MultiIndex level.

fill_value : float or None, default None

Fill existing missing (NaN) values, and any new element needed for successful DataFrame alignment, with this value before computation. If data in both corresponding DataFrame locations is missing the result will be missing.

DataFrame

Result of the arithmetic operation.

DataFrame.add : Add DataFrames. DataFrame.sub : Subtract DataFrames. DataFrame.mul : Multiply DataFrames. DataFrame.div : Divide DataFrames (float division). DataFrame.truediv : Divide DataFrames (float division). DataFrame.floordiv : Divide DataFrames (integer division). DataFrame.mod : Calculate modulo (remainder after division). DataFrame.pow : Calculate exponential power.

Mismatched indices will be unioned together.

>>> df = pd.DataFrame({'angles': [0, 3, 4],
...                    'degrees': [360, 180, 360]},
...                   index=['circle', 'triangle', 'rectangle'])
>>> df
           angles  degrees
circle          0      360
triangle        3      180
rectangle       4      360

Add a scalar with operator version which returns the same results.

>>> df + 1
           angles  degrees
circle          1      361
triangle        4      181
rectangle       5      361
>>> df.add(1)
           angles  degrees
circle          1      361
triangle        4      181
rectangle       5      361

Divide by constant with reverse version.

>>> df.div(10)
           angles  degrees
circle        0.0     36.0
triangle      0.3     18.0
rectangle     0.4     36.0
>>> df.rdiv(10)
             angles   degrees
circle          inf  0.027778
triangle   3.333333  0.055556
rectangle  2.500000  0.027778

Subtract a list and Series by axis with operator version.

>>> df - [1, 2]
           angles  degrees
circle         -1      358
triangle        2      178
rectangle       3      358
>>> df.sub([1, 2], axis='columns')
           angles  degrees
circle         -1      358
triangle        2      178
rectangle       3      358
>>> df.sub(pd.Series([1, 1, 1], index=['circle', 'triangle', 'rectangle']),
...        axis='index')
           angles  degrees
circle         -1      359
triangle        2      179
rectangle       3      359

Multiply a DataFrame of different shape with operator version.

>>> other = pd.DataFrame({'angles': [0, 3, 4]},
...                      index=['circle', 'triangle', 'rectangle'])
>>> other
           angles
circle          0
triangle        3
rectangle       4
>>> df * other
           angles  degrees
circle          0      NaN
triangle        9      NaN
rectangle      16      NaN
>>> df.mul(other, fill_value=0)
           angles  degrees
circle          0      0.0
triangle        9      0.0
rectangle      16      0.0

Divide by a MultiIndex by level.

>>> df_multindex = pd.DataFrame({'angles': [0, 3, 4, 4, 5, 6],
...                              'degrees': [360, 180, 360, 360, 540, 720]},
...                             index=[['A', 'A', 'A', 'B', 'B', 'B'],
...                                    ['circle', 'triangle', 'rectangle',
...                                     'square', 'pentagon', 'hexagon']])
>>> df_multindex
             angles  degrees
A circle          0      360
  triangle        3      180
  rectangle       4      360
B square          4      360
  pentagon        5      540
  hexagon         6      720
>>> df.div(df_multindex, level=1, fill_value=0)
             angles  degrees
A circle        NaN      1.0
  triangle      1.0      1.0
  rectangle     1.0      1.0
B square        0.0      0.0
  pentagon      0.0      0.0
  hexagon       0.0      0.0
sum(axis=None, skipna=None, level=None, numeric_only=None, min_count=0, **kwargs)

Return the sum of the values over the requested axis.

This is equivalent to the method numpy.sum.

axis : {index (0), columns (1)}

Axis for the function to be applied on.

skipna : bool, default True

Exclude NA/null values when computing the result.

level : int or level name, default None

If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing into a Series.

numeric_only : bool, default None

Include only float, int, boolean columns. If None, will attempt to use everything, then use only numeric data. Not implemented for Series.

min_count : int, default 0

The required number of valid values to perform the operation. If fewer than min_count non-NA values are present the result will be NA.

**kwargs

Additional keyword arguments to be passed to the function.

Series or DataFrame (if level specified)

Series.sum : Return the sum. Series.min : Return the minimum. Series.max : Return the maximum. Series.idxmin : Return the index of the minimum. Series.idxmax : Return the index of the maximum. DataFrame.sum : Return the sum over the requested axis. DataFrame.min : Return the minimum over the requested axis. DataFrame.max : Return the maximum over the requested axis. DataFrame.idxmin : Return the index of the minimum over the requested axis. DataFrame.idxmax : Return the index of the maximum over the requested axis.

>>> idx = pd.MultiIndex.from_arrays([
...     ['warm', 'warm', 'cold', 'cold'],
...     ['dog', 'falcon', 'fish', 'spider']],
...     names=['blooded', 'animal'])
>>> s = pd.Series([4, 2, 0, 8], name='legs', index=idx)
>>> s
blooded  animal
warm     dog       4
         falcon    2
cold     fish      0
         spider    8
Name: legs, dtype: int64
>>> s.sum()
14

Sum using level names, as well as indices.

>>> s.sum(level='blooded')
blooded
warm    6
cold    8
Name: legs, dtype: int64
>>> s.sum(level=0)
blooded
warm    6
cold    8
Name: legs, dtype: int64

By default, the sum of an empty or all-NA Series is 0.

>>> pd.Series([]).sum()  # min_count=0 is the default
0.0

This can be controlled with the min_count parameter. For example, if you’d like the sum of an empty series to be NaN, pass min_count=1.

>>> pd.Series([]).sum(min_count=1)
nan

Thanks to the skipna parameter, min_count handles all-NA and empty series identically.

>>> pd.Series([np.nan]).sum()
0.0
>>> pd.Series([np.nan]).sum(min_count=1)
nan
truediv(other, axis='columns', level=None, fill_value=None)

Get Floating division of dataframe and other, element-wise (binary operator truediv).

Equivalent to dataframe / other, but with support to substitute a fill_value for missing data in one of the inputs. With reverse version, rtruediv.

Among flexible wrappers (add, sub, mul, div, mod, pow) to arithmetic operators: +, -, *, /, //, %, **.

other : scalar, sequence, Series, or DataFrame

Any single or multiple element data structure, or list-like object.

axis : {0 or ‘index’, 1 or ‘columns’}

Whether to compare by the index (0 or ‘index’) or columns (1 or ‘columns’). For Series input, axis to match Series index on.

level : int or label

Broadcast across a level, matching Index values on the passed MultiIndex level.

fill_value : float or None, default None

Fill existing missing (NaN) values, and any new element needed for successful DataFrame alignment, with this value before computation. If data in both corresponding DataFrame locations is missing the result will be missing.

DataFrame

Result of the arithmetic operation.

DataFrame.add : Add DataFrames. DataFrame.sub : Subtract DataFrames. DataFrame.mul : Multiply DataFrames. DataFrame.div : Divide DataFrames (float division). DataFrame.truediv : Divide DataFrames (float division). DataFrame.floordiv : Divide DataFrames (integer division). DataFrame.mod : Calculate modulo (remainder after division). DataFrame.pow : Calculate exponential power.

Mismatched indices will be unioned together.

>>> df = pd.DataFrame({'angles': [0, 3, 4],
...                    'degrees': [360, 180, 360]},
...                   index=['circle', 'triangle', 'rectangle'])
>>> df
           angles  degrees
circle          0      360
triangle        3      180
rectangle       4      360

Add a scalar with operator version which returns the same results.

>>> df + 1
           angles  degrees
circle          1      361
triangle        4      181
rectangle       5      361
>>> df.add(1)
           angles  degrees
circle          1      361
triangle        4      181
rectangle       5      361

Divide by constant with reverse version.

>>> df.div(10)
           angles  degrees
circle        0.0     36.0
triangle      0.3     18.0
rectangle     0.4     36.0
>>> df.rdiv(10)
             angles   degrees
circle          inf  0.027778
triangle   3.333333  0.055556
rectangle  2.500000  0.027778

Subtract a list and Series by axis with operator version.

>>> df - [1, 2]
           angles  degrees
circle         -1      358
triangle        2      178
rectangle       3      358
>>> df.sub([1, 2], axis='columns')
           angles  degrees
circle         -1      358
triangle        2      178
rectangle       3      358
>>> df.sub(pd.Series([1, 1, 1], index=['circle', 'triangle', 'rectangle']),
...        axis='index')
           angles  degrees
circle         -1      359
triangle        2      179
rectangle       3      359

Multiply a DataFrame of different shape with operator version.

>>> other = pd.DataFrame({'angles': [0, 3, 4]},
...                      index=['circle', 'triangle', 'rectangle'])
>>> other
           angles
circle          0
triangle        3
rectangle       4
>>> df * other
           angles  degrees
circle          0      NaN
triangle        9      NaN
rectangle      16      NaN
>>> df.mul(other, fill_value=0)
           angles  degrees
circle          0      0.0
triangle        9      0.0
rectangle      16      0.0

Divide by a MultiIndex by level.

>>> df_multindex = pd.DataFrame({'angles': [0, 3, 4, 4, 5, 6],
...                              'degrees': [360, 180, 360, 360, 540, 720]},
...                             index=[['A', 'A', 'A', 'B', 'B', 'B'],
...                                    ['circle', 'triangle', 'rectangle',
...                                     'square', 'pentagon', 'hexagon']])
>>> df_multindex
             angles  degrees
A circle          0      360
  triangle        3      180
  rectangle       4      360
B square          4      360
  pentagon        5      540
  hexagon         6      720
>>> df.div(df_multindex, level=1, fill_value=0)
             angles  degrees
A circle        NaN      1.0
  triangle      1.0      1.0
  rectangle     1.0      1.0
B square        0.0      0.0
  pentagon      0.0      0.0
  hexagon       0.0      0.0
var(axis=None, skipna=None, level=None, ddof=1, numeric_only=None, **kwargs)

Return unbiased variance over requested axis.

Normalized by N-1 by default. This can be changed using the ddof argument.

axis : {index (0), columns (1)}

skipna : bool, default True

Exclude NA/null values. If an entire row/column is NA, the result will be NA.

level : int or level name, default None

If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing into a Series.

ddof : int, default 1

Delta Degrees of Freedom. The divisor used in calculations is N - ddof, where N represents the number of elements.

numeric_only : bool, default None

Include only float, int, boolean columns. If None, will attempt to use everything, then use only numeric data. Not implemented for Series.

Series or DataFrame (if level specified)

To have the same behaviour as numpy.var, use ddof=0 (instead of the default ddof=1)
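A minimal sketch (illustrative values, not from the source) of the ddof behaviour for var:

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, 2.0, 3.0, 4.0])

sample_var = s.var()            # ddof=1: sum of squared deviations / (N - 1) = 5/3
population_var = s.var(ddof=0)  # ddof=0: divides by N, matching numpy.var's default
```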

omfit_dir

class omfit_classes.omfit_dir.OMFITdir(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.startup_framework.OMFITobject

OMFIT class used to interface with directories

Parameters
  • filename – directory path passed to OMFITobject class

  • extensions – dictionary mapping filename expression to OMFIT classes, for example: {‘.dat’: ‘OMFITnamelist’, ‘.xml’: ‘OMFITxml’}

  • **kw – keyword dictionary passed to OMFITobject class

update_dir()[source]

populate current object with folders and files from self.filename directory

add(key, obj)[source]

Deploy OMFITobject obj to the current OMFITdir directory

Parameters
  • key – key where to add the object (NOTE: this key can have / separators to indicate subdirectories)

  • obj – OMFITobject to add

Returns

OMFITobject deployed to directory

importDir(subfolder=None)[source]

This method adds the directory (or a specific subfolder) to sys.path (i.e. PYTHONPATH) so that the Python functions contained in this folder can be called from within OMFIT

Parameters

subfolder – subfolder under the OMFITdir object

Returns

None

omfit_dmp

class omfit_classes.omfit_dmp.OMFITdmp(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.startup_framework.OMFITobject

Class to satisfy the DoE Data Management Plan, which requires the data used in a publication figure to be made available

from_fig(fig)[source]

Populate object and file from matplotlib figure

Parameters

fig – matplotlib figure

Returns

self

load()[source]
plot()[source]

generate plot based on dictionary content

Returns

matplotlib Figure handle

save()[source]

writes dictionary to HDF5

script()[source]
Returns

string with Python script to reproduce the figure (with DATA!)

OMFITpythonPlot(filename)[source]

generate OMFITpythonPlot script from figure (with DATA!)

Parameters

filename – filename for OMFITpythonPlot script

Returns

OMFITpythonPlot object

omfit_efit

class omfit_classes.omfit_efit.OMFITefitCDF(*args, **kwargs)[source]

Bases: omfit_classes.omfit_hdf5.OMFIThdf5

class used to interface with CDF files generated by EFIT++

Parameters
  • filename – filename passed to OMFITascii class

  • **kw – keyword dictionary passed to OMFITascii class

property times
plot(time=None)[source]

Function used to plot input constraints and outputs

This method is called by .plot() when the object is a CDF file

Parameters

time – time in seconds

Returns

None

plot_trace(grpName, yname=None, xname='Radius [m]', tgtName='target', cmpName='computed', sigName='sigmas', title=None, ylim_min=None, ylim_max=None)[source]
Parameters
  • grpName – EFIT++ constraint dictionary name e.g. [input][constraints][grpName]

  • yname – y-axis figure title

  • xname – x-axis figure title

  • tgtName – EFIT++ target profile dictionary name e.g. [input][constraints][grpName][tgtName]

  • cmpName – EFIT++ computed profile dictionary name e.g. [input][constraints][grpName][cmpName]

  • sigName – EFIT++ sigma profile dictionary name e.g. [input][constraints][grpName][sigName]

  • title – figure title

  • ylim_min – y-axis figure minimum value

  • ylim_max – y-axis figure maximum value

Returns

plot_profile(grpName, time=None, yname=None, xname='Radius [m]', tgtName='target', cmpName='computed', sigName='sigmas', rName='r', title=None, ylim_min=None, ylim_max=None)[source]
Parameters
  • grpName – EFIT++ constraint dictionary name e.g. [input][constraints][grpName]

  • time – single time slice in seconds to plot profile data

  • yname – y-axis figure title

  • xname – x-axis figure title

  • tgtName – EFIT++ target profile dictionary name e.g. [input][constraints][grpName][tgtName]

  • cmpName – EFIT++ computed profile dictionary name e.g. [input][constraints][grpName][cmpName]

  • sigName – EFIT++ sigma profile dictionary name e.g. [input][constraints][grpName][sigName]

  • rName – EFIT++ radius profile dictionary name e.g. [input][constraints][grpName][rName]

  • title – figure title

  • ylim_min – y-axis figure minimum value

  • ylim_max – y-axis figure maximum value

Returns

omfit_classes.omfit_efit.available_EFITs(scratch_area, device, shot, allow_rdb=True, allow_mds=True, allow_guess=True, **kw)[source]

Attempts to look up a list of available EFITs using various sources

Parameters
  • scratch_area – dict Scratch area for storing results to reduce repeat calls.

  • device – str Device name

  • shot – int Shot number

  • allow_rdb – bool Allow connection to DIII-D RDB to gather EFIT information (only applicable for select devices) (First choice for supported devices)

  • allow_mds – bool Allow connection to MDSplus to gather EFIT information (only applicable to select devices) (First choice for non-RDB devices, second choice for devices that normally support RDB)

  • allow_guess – bool Allow guesses based on common patterns of EFIT availability on specific devices (Last resort, only if other options fail)

  • **kw

    Keywords passed to specific functions. Can include:

    default_snap_list : dict [optional]

    Default set of EFIT treenames. Newly discovered ones will be added to the list.

    format : str

    Instructions for formatting data to make the EFIT tag name. Provided for compatibility with available_efits_from_rdb() because the only option is ‘{tree}’.

Returns

(dict, str) Dictionary keys will be descriptions of the EFITs

Dictionary values will be the formatted identifiers. If lookup fails, the dictionary will be {‘’: ‘’} or will only contain default results, if any.

String will contain information about the discovered EFITs

omfit_efitviewer

Contains supporting functions to back the efitviewer GUI

omfit_classes.omfit_efitviewer.efitviewer()[source]

Shortcut for launching efitviewer

omfit_efund

class omfit_classes.omfit_efund.OMFITmhdin(*args, **kwargs)[source]

Bases: omfit_classes.omfit_namelist.OMFITnamelist

scaleSizes = 50
invalid = 99
load(*args, **kw)[source]

Load OMFITmhdin file

Parameters
  • *args – arguments passed to OMFITnamelist.load()

  • **kw – keyword arguments passed to OMFITnamelist.load()

save(*args, **kw)[source]

Save OMFITmhdin file

Parameters
  • *args – arguments passed to OMFITnamelist.save()

  • **kw – keyword arguments passed to OMFITnamelist.save()

static plot_coil(data, patch_facecolor='lightgray', patch_edgecolor='black', label=None, ax=None)[source]

plot individual coil

Parameters
  • data – FC, OH, VESSEL data array row

  • patch_facecolor – face color

  • patch_edgecolor – edge color

  • label – [True, False]

  • ax – axis

Returns

matplotlib rectangle patch
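
As a rough illustration, a single coil row can be rendered as a matplotlib rectangle patch. This is a simplified sketch, not the actual OMFITmhdin.plot_coil implementation: it assumes the row is ordered (r, z, w, h, a1, a2) and ignores the tilt angles.

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for non-interactive use
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle

def plot_coil_sketch(data, patch_facecolor='lightgray', patch_edgecolor='black', ax=None):
    """Draw one coil as a rectangle centered at (r, z) with width w and height h.

    `data` is assumed to be a row like (r, z, w, h, a1, a2); the tilt
    angles a1/a2 are ignored in this simplified sketch.
    """
    r, z, w, h = data[:4]
    if ax is None:
        ax = plt.gca()
    patch = Rectangle((r - w / 2, z - h / 2), w, h,
                      facecolor=patch_facecolor, edgecolor=patch_edgecolor)
    ax.add_patch(patch)
    return patch

patch = plot_coil_sketch((1.7, 0.5, 0.1, 0.2, 0.0, 0.0))
```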

plot_flux_loops(display=None, colors=None, label=False, ax=None)[source]

plot the flux loops

Parameters
  • display – array used to turn on/off the display of individual flux loops

  • colors – array used to set the color of individual flux loops

  • label – [True, False]

  • ax – axis

plot_magnetic_probes(display=None, colors=None, label=False, ax=None)[source]

plot the magnetic probes

Parameters
  • display – array used to turn on/off the display of individual magnetic probes

  • colors – array used to set the color of individual magnetic probes

  • label – [True, False]

  • ax – axis

plot_poloidal_field_coils(edgecolor='none', facecolor='orange', label=False, ax=None)[source]

Plot poloidal field coils

Parameters
  • label – [True, False]

  • ax – axis

plot_ohmic_coils(edgecolor='none', facecolor='none', label=False, ax=None)[source]

Plot ohmic coils

Parameters
  • label – [True, False]

  • ax – axis

plot_vessel(edgecolor='none', facecolor='gray', label=False, ax=None)[source]

Plot vacuum vessel

Parameters
  • label – [True, False]

  • ax – axis

plot_system(system, edgecolor, facecolor, label=False, ax=None)[source]

Plot coil/vessel system

Parameters
  • system – [‘FC’, ‘OH’, ‘VESSEL’]

  • edgecolor – color of patch edges

  • facecolor – color of patch fill

  • label – [True, False]

  • ax – axis

plot_domain(ax=None)[source]

plot EFUND computation domain

Parameters

ax – axis

plot(label=False, plot_coils=True, plot_vessel=True, plot_measurements=True, plot_domain=True, ax=None)[source]

Composite plot

Parameters
  • label – label coils and measurements

  • plot_coils – plot poloidal field and oh coils

  • plot_vessel – plot conducting vessel

  • plot_measurements – plot flux loops and magnetic probes

  • plot_domain – plot EFUND computing domain

  • ax – axis

aggregate_oh_coils(index=None, group=None)[source]

Aggregate selected OH coils into a single coil

Parameters
  • index – index of OH coils to aggregate

  • group – group of OH coils to aggregate

disable_oh_group(group)[source]

remove OH group

Parameters

group – group of OH coils to disable

change_R(deltaR=0.0)[source]

Add or subtract deltaR to the radial location of coils, flux loops and magnetic probes, effectively changing the aspect ratio

Parameters

deltaR – radial shift in m

change_Z(deltaZ=0.0)[source]

Add or subtract deltaZ to the vertical location of coils, flux loops and magnetic probes, effectively shifting the system vertically

Parameters

deltaZ – vertical shift in m

scale_system(scale_factor=0.0)[source]

Scale the radial location of coils, flux loops and magnetic probes, effectively changing the major radius while holding the aspect ratio fixed

Parameters

scale_factor – scaling factor to multiply the system by
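
The shift and scale operations above amount to simple transformations of the coordinate arrays. A minimal sketch with numpy (function and variable names are illustrative, not the OMFITmhdin internals):

```python
import numpy as np

def change_R(rc, deltaR):
    """Shift radial locations by deltaR [m], changing the aspect ratio."""
    return np.asarray(rc, float) + deltaR

def scale_system(rc, zc, scale_factor):
    """Multiply all coordinates by scale_factor, changing the major radius
    while holding the aspect ratio fixed."""
    return np.asarray(rc, float) * scale_factor, np.asarray(zc, float) * scale_factor

rc = np.array([1.0, 1.5, 2.0])   # radial positions [m]
zc = np.array([-0.5, 0.0, 0.5])  # vertical positions [m]
rc_shifted = change_R(rc, 0.1)
rc_scaled, zc_scaled = scale_system(rc, zc, 2.0)
```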

fill_coils_from(mhdin)[source]

Copy FC, OH, VESSEL from passed object into current object, without changing the number of elements in current object.

This requires that the number of elements in the current object is greater or equal than the number of elements in the passed object. The extra elements in the current object will be placed at R=0, Z=0

Parameters

mhdin – other mhdin object

modify_vessel_elements(index, action='keep')[source]

Utility function to remove vessel elements

Parameters
  • index – index of the vessel elements to either keep or delete

  • action – can be either ‘keep’ or ‘delete’

fill_probes_loops_from(mhdin)[source]

Copy flux loops and magnetic probes from other object into current object, without changing the number of elements in current object

This requires that the number of elements in the current object is greater or equal than the number of elements in the passed object. The extra elements in the current object will be placed at R=0, Z=0

Parameters

mhdin – other mhdin object

fill_scalars_from(mhdin)[source]

copy scalar quantities in IN3 and IN5 namelists without overwriting [‘IFCOIL’, ‘IECOIL’, ‘IVESEL’]

Parameters

mhdin – other mhdin object

pretty_print(default_tilt2=0)[source]
efund_to_outline(coil_data, outline)[source]

Converts efund data format to the ods outline format

Parameters
  • coil_data – 6-index array, r,z,w,h,a1,a2

  • outline – ods outline entry

Returns

outline

outline_to_efund(outline)[source]

Converts the ods outline format to the efund data format. Since efund only supports parallelograms and requires 2 sides to be either vertical or horizontal, this will likely not match the outline very well. Instead, the parallelogram will only match the angle of the lower left side, the height of the upper right side, and the width of the left-most top side.

Parameters
  • outline – ods outline entry

Returns

6-index array, r,z,w,h,a1,a2
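
For intuition, the 6-index efund element (r, z, w, h, a1, a2) describes a parallelogram that can be expanded into four outline corners. The sign/shear convention for a1 and a2 below is an assumption for illustration; with a1 = a2 = 0 it reduces to an axis-aligned rectangle centered at (r, z), which is the unambiguous case.

```python
import numpy as np

def efund_to_corners(r, z, w, h, a1=0.0, a2=0.0):
    """Return the 4 corner (R, Z) points of an efund parallelogram.

    Assumed convention: a1 tilts the vertical sides (degrees from vertical),
    a2 tilts the horizontal sides (degrees from horizontal); a1 = a2 = 0
    gives an axis-aligned rectangle centered at (r, z).
    """
    dr, dz = w / 2.0, h / 2.0
    sr = dz * np.tan(np.radians(a1))  # horizontal shear from a1
    sz = dr * np.tan(np.radians(a2))  # vertical shear from a2
    return [
        (r - dr - sr, z - dz - sz),
        (r + dr - sr, z - dz + sz),
        (r + dr + sr, z + dz + sz),
        (r - dr + sr, z + dz - sz),
    ]

corners = efund_to_corners(1.5, 0.0, 0.2, 0.4)  # rectangle when a1 = a2 = 0
```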

rectangle_to_efund(rectangle)[source]
annulus_to_efund(annulus)[source]

Converts an ods annulus format to the efund data format by approximating it as a square

Parameters
  • annulus – ods annulus entry

Returns

6-index array, r,z,w,h,a1,a2

thick_line_to_efund(thick_line)[source]

Converts the ods thick_line format to the efund data format. The only time a thick line is a valid shape in efund is when it is vertical or horizontal. All others will not be a great fit, but some approximation is used.

Parameters
  • thick_line – ods thick_line entry

Returns

6-index array, r,z,w,h,a1,a2

annular_to_efund(annular)[source]

Converts the ods annular format to the efund data format. The only time annular segments are a valid shape in efund is when they are vertical or horizontal. All others will not be a great fit, but some approximation is used.

Parameters
  • annular – ods annular entry

Returns

6-index array, r,z,w,h,a1,a2, in which each is an array over the number of segments

init_mhdin(device)[source]
from_omas(ods, passive_map='VS')[source]
to_omas(ods=None, update=['pf_active', 'flux_loop', 'b_field_pol_probe', 'vessel'])[source]

Transfers data in EFIT mhdin.dat format to ODS

WARNING: only rudimentary identifiers are assigned for pf_active. You should assign your own identifiers and only rely on this function to assign numerical geometry data.

Parameters
  • ods – ODS instance Data will be added in-place

  • update – systems to populate [‘oh’, ‘pf_active’, ‘flux_loop’, ‘b_field_pol_probe’] [‘magnetics’] will enable both [‘flux_loop’, ‘b_field_pol_probe’] NOTE that in IMAS the OH information goes under pf_active

Returns

ODS instance

from_miller(a=1.2, R=3.0, kappa=1.8, delta=0.4, zeta=0.0, zmag=0.0, nf=14, wf=0.05, hf=0.05, turns=100)[source]
fake_geqdsk(rbbbs, zbbbs, rlim, zlim, Bt, Ip, nw, nh)[source]

This function generates a fake geqdsk that can be used for fixed boundary EFIT modeling

Parameters
  • rbbbs – R of last closed flux surface [m]

  • zbbbs – Z of last closed flux surface [m]

  • rlim – R of limiter [m]

  • zlim – Z of limiter [m]

  • Bt – Central magnetic field [T]

  • Ip – Plasma current [A]

  • nw – number of horizontal grid points

  • nh – number of vertical grid points

class omfit_classes.omfit_efund.OMFITdprobe(*args, **kwargs)[source]

Bases: omfit_classes.omfit_efund.OMFITmhdin

load(*args, **kw)[source]

Load OMFITmhdin file

Parameters
  • *args – arguments passed to OMFITnamelist.load()

  • **kw – keyword arguments passed to OMFITnamelist.load()

class omfit_classes.omfit_efund.OMFITnstxMHD(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.omfit_ascii.OMFITascii

OMFIT class to read NSTX MHD device files such as device01152015.dat, diagSpec01152015.dat and signals_020916_PF4.dat

OMFIT class to parse NSTX MHD device files

Parameters
  • filename – filename

  • **kw – arguments passed to __init__ of OMFITascii

load()[source]
pretty_print()[source]

Print data in file as arrays, as it is needed for a fortran namelist

omfit_classes.omfit_efund.get_mhdindat(device=None, pulse=None, select_from_dict=None, filenames=['dprobe.dat', 'mhdin.dat'])[source]
Parameters
  • device – name of the device to get the mhdin.dat file of

  • pulse – for certain devices the mhdin.dat depends on the shot number

  • select_from_dict – select from external dictionary

  • filenames – filenames to get, typically ‘mhdin.dat’ and/or ‘dprobe.dat’ NOTE: ‘dprobe.dat’ is a subset of ‘mhdin.dat’

Returns

OMFITmhdin object

omfit_classes.omfit_efund.green_to_omas(ods=None, filedir='/fusion/projects/codes/efit/efitai/efit_support_files/DIII-D/green/168191/', nw=129, nh=129, nsilop=44, magpri=76, nesum=6, nfsum=18, nvsum=24)[source]

This function reads EFUND-generated Green’s function tables and puts them into IMAS

Parameters
  • ods – input ods to populate

  • filedir – directory which contains EFUND generated binary files

  • nw – number of horizontal grid points

  • nh – number of vertical grid points

  • magpri – number of magnetic probes (will be overwritten if available in directory)

  • nsilop – number of flux loops (will be overwritten if available in directory)

  • nesum – number of e-coils (will be overwritten if available in directory)

  • nfsum – number of f-coils (will be overwritten if available in directory)

  • nvsum – number of vessel structures (will be overwritten if available in directory)

Returns

ods

omfit_elite

class omfit_classes.omfit_elite.OMFITeliteGamma(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.omfit_ascii.OMFITascii

ELITE growth rates data file (Gamma file)

load()[source]
class omfit_classes.omfit_elite.OMFITeliteEigenfunction(*args, **kwargs)[source]

Bases: omfit_classes.omfit_ascii.OMFITascii, omfit_classes.sortedDict.SortedDict

load()[source]
peak(decimate=1)[source]
envelope(decimate=1)[source]
plot(fancy=False, decimate=None)[source]
class omfit_classes.omfit_elite.OMFITelite2DEigenfunction(*args, **kwargs)[source]

Bases: omfit_classes.omfit_base.OMFITtree

Parameters
  • filename – ‘directory/bla/OMFITsave.txt’ or ‘directory/bla.zip’ where the OMFITtree will be saved (if ‘’ it will be saved in the same folder of the parent OMFITtree)

  • only – list of strings used to load only some of the branches from the tree (e.g. [“[‘MainSettings’]”, ”[‘myModule’][‘SCRIPTS’]”])

  • modifyOriginal – by default OMFIT will save a copy and then overwrite previous save only if successful. If modifyOriginal=True and filename is not .zip, will write data directly at destination, which will be faster but comes with the risk of deleting a good save if the new save fails for some reason

  • readOnly – will place entry in OMFITsave.txt of the parent so that this OMFITtree can be loaded, but will not save the actual content of this subtree. readOnly=True is meant to be used only after this subtree is deployed where its filename says it will be. Using this feature could result in much faster project saves if the content of this tree is large.

  • quiet – Verbosity level

  • developerMode – load OMFITpython objects within the tree as modifyOriginal

  • serverPicker – take server/tunnel info from MainSettings[‘SERVER’]

  • remote – access the filename in the remote directory

  • server – if specified the file will be downsync from the server

  • tunnel – access the filename via the tunnel

  • **kw – Extra keywords are passed to the SortedDict class

load()[source]

method for loading OMFITtree from disk

Parameters
  • filename – ‘directory/bla/OMFITsave.txt’ or ‘directory/bla.zip’ where the OMFITtree will be saved (if ‘’ it will be saved in the same folder of the parent OMFITtree)

  • only – list of strings used to load only some of the branches from the tree (e.g. [“[‘MainSettings’]”, ”[‘myModule’][‘SCRIPTS’]”])

  • modifyOriginal – by default OMFIT will save a copy and then overwrite previous save only if successful. If modifyOriginal=True and filename is not .zip, will write data directly at destination, which will be faster but comes with the risk of deleting a good save if the new save fails for some reason

  • readOnly – will place entry in OMFITsave.txt of the parent so that this OMFITtree can be loaded, but will not save the actual content of this subtree. readOnly=True is meant to be used only after this subtree is deployed where its filename says it will be. Using this feature could result in much faster project saves if the content of this tree is large.

  • quiet – Verbosity level

  • developerMode – load OMFITpython objects within the tree as modifyOriginal

  • lazyLoad – enable/disable lazy load of picklefiles and xarrays

save()[source]
plot()[source]
class omfit_classes.omfit_elite.OMFITelitefun2d(*args, **kwargs)[source]

Bases: omfit_classes.omfit_ascii.OMFITascii, omfit_classes.sortedDict.SortedDict

load()[source]
plot()[source]
class omfit_classes.omfit_elite.OMFITelitextopsi(*args, **kwargs)[source]

Bases: omfit_classes.omfit_ascii.OMFITascii, omfit_classes.sortedDict.SortedDict

load()[source]
class omfit_classes.omfit_elite.OMFITeliteAggregated(*args, **kwargs)[source]

Bases: omfit_classes.omfit_ascii.OMFITascii, omfit_classes.sortedDict.SortedDict

load()[source]
class omfit_classes.omfit_elite.OMFITeqdat(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.omfit_ascii.OMFITascii

OMFIT class used to interface to equilibria files generated by ELITE and BALOO (.dskbal files)

NOTE: this object is “READ ONLY”, meaning that the changes to the entries of this object will not be saved to a file. A .save() method could be written if it becomes necessary

Parameters
  • filename – filename passed to OMFITobject class

  • **kw – keyword dictionary passed to OMFITobject class

load()[source]
plot(**kw)[source]
class omfit_classes.omfit_elite.OMFITbalstab(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.omfit_ascii.OMFITascii

OMFIT class used to interface to balstab file from BALOO

NOTE: this object is “READ ONLY”, meaning that the changes to the entries of this object will not be saved to a file. A .save() method could be written if it becomes necessary

Parameters
  • filename – filename passed to OMFITobject class

  • **kw – keyword dictionary passed to OMFITobject class

load()[source]
class omfit_classes.omfit_elite.OMFIToutbal(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.omfit_ascii.OMFITascii

OMFIT class used to interface to outbal file from BALOO

NOTE: this object is “READ ONLY”, meaning that the changes to the entries of this object will not be saved to a file. A .save() method could be written if it becomes necessary

Parameters
  • filename – filename passed to OMFITobject class

  • **kw – keyword dictionary passed to OMFITobject class

load()[source]
plot(**kw)[source]

omfit_elm

Contains classes and functions to perform ELM detection and ELM filtering: - OMFITelm class; main workhorse for ELM detection - Some smoothing functions used by OMFITelm

The regression test for OMFITelm is in regression/test_OMFITelm.py

omfit_classes.omfit_elm.asym_gauss_smooth(x, y, s, lag, leading_side_width_factor)[source]

This is a smoothing function with a Gaussian kernel that does not require evenly spaced data and allows the Gaussian center to be shifted.

Parameters
  • x – array Independent variable

  • y – array Dependent variable

  • s – float Sigma of tailing side (same units as x)

  • lag – float Positive values shift the Gaussian kernel back in time to increase weight in the past: makes the result laggier. (same units as x)

  • leading_side_width_factor – float The leading side sigma will be this much bigger than the tailing side. Values > 1 increase the weight on data from the past, making the signal laggier. (unitless)

Returns

array Smoothed version of y
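
The kernel described above can be sketched as a per-point weighted average. This is a minimal illustration of the asymmetric-Gaussian idea (shifted center, wider sigma on the past side), not necessarily the exact OMFIT implementation:

```python
import numpy as np

def asym_gauss_smooth_sketch(x, y, s, lag=0.0, leading_side_width_factor=1.0):
    """Gaussian-kernel smoother for unevenly spaced data.

    The kernel for output point i is centered at x[i] - lag; samples before
    the center (the past) use sigma = s * leading_side_width_factor, samples
    after it use sigma = s.
    """
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    out = np.empty_like(y)
    for i, xi in enumerate(x):
        center = xi - lag
        sig = np.where(x < center, s * leading_side_width_factor, s)
        w = np.exp(-((x - center) ** 2) / (2.0 * sig ** 2))
        out[i] = np.sum(w * y) / np.sum(w)
    return out

x = np.array([0.0, 1.0, 2.5, 3.0, 4.2])  # unevenly spaced, as allowed
y = np.ones_like(x)
smoothed = asym_gauss_smooth_sketch(x, y, s=1.0, lag=0.5, leading_side_width_factor=2.0)
```

A constant input passes through unchanged, which is a quick sanity check for any weighted-average smoother.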

omfit_classes.omfit_elm.fft_smooth(xx, yy, s)[source]

Smooth by removing part of the spectrum in the frequency domain

  1. FFT

  2. Cut out part of the spectrum above some frequency

  3. Inverse FFT to get back to time domain. The result will be missing some of the high frequency variation.

Parameters
  • xx – 1D array Independent variable, such as time

  • yy – 1D array matching length of xx Dependent variable

  • s – float Smoothing timescale in units matching xx. The cutoff frequency is cut = 1/(s*pi)

Returns

1D array matching length of xx Smoothed (lowpass) version of yy
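
The three FFT steps above can be sketched directly with numpy. This simplified version assumes evenly spaced xx (the docstring does not state whether the real function requires this) and uses the stated cutoff cut = 1/(s*pi):

```python
import numpy as np

def fft_smooth_sketch(xx, yy, s):
    """Lowpass yy by zeroing spectral components above cut = 1/(s*pi).

    Assumes xx is evenly spaced; uses rfft so the output stays real.
    """
    dt = xx[1] - xx[0]
    spec = np.fft.rfft(yy)
    freq = np.fft.rfftfreq(len(yy), d=dt)
    spec[freq > 1.0 / (s * np.pi)] = 0.0  # cut out the high-frequency part
    return np.fft.irfft(spec, n=len(yy))

t = np.linspace(0, 100, 1000)  # time base in ms
y = np.sin(2 * np.pi * t / 50) + 0.3 * np.sin(2 * np.pi * t / 0.5)
smoothed = fft_smooth_sketch(t, y, s=5.0)  # keeps only the slow component
```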

class omfit_classes.omfit_elm.OMFITelm(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict

Quickly detect ELMs and run a filter that will tell you which time-slices are okay and which should be rejected based on user specifications for ELM phase, etc.

Parameters
  • device – string Name of tokamak or MDS server

  • shot – int Shot number

  • detection_settings

    dict or string. If this is ‘default’ or {}, then default settings will be used. To change some of the settings from their machine-specific defaults, set to a dictionary that can contain:

    • default_filterscope_for_elms: string or list of strings giving filterscope(s) to use. Will be overridden by use_filterscopes keyword, if provided.

    • smoother: string giving name of smoothing function. Pick from:
      • gauss_smooth: Gaussian smoother

      • rc_smooth: RC lowpass filter, specifically implemented to mimic approximations used in the DIII-D PCS.

      • asym_gauss_smooth: Asymmetric Gaussian smoother with different sigma for past and future.

      • combo: Combination of rc_smooth and gauss_smooth

      • butter_smooth: Butterworth smoothing lowpass filter

      • fft_smooth: lowpass via FFT, cut out part of spectrum, inverse FFT

    • time_window: two element list of numbers giving ends of the time range to consider in ms

    • time_factor: float: factor to multiply into MDSplus times to get timebase in ms

    • jump_hold_max_dt: float Sets maximum dt between input data samples for the Jump&Hold method. Used to exclude slow data from old shots, which don’t work.

    • allow_fallback_when_dt_large: bool. If a basic test of data compatibility with the chosen method fails, a new method may be chosen if this flag is set.

    • hold_elm_flag_until_low_dalpha: flag for turning on extra logic to hold the during-ELM state until D_alpha drops to close to pre-ELM levels.

    • hold_elm_low_dalpha_threshold: float: Threshold as a fraction between pre-ELM min & during-ELM max

    • hold_elm_min_finding_time_window: float: Interval before ELM start to search for pre-ELM minimum (ms)

    • hold_elm_timeout: float: Maximum time the ELM flag hold can persist (ms). The ELM can still be longer than this as determined by other rules, but the hold flag cannot extend the ELM past this total duration.

    • detection_method: int: 0 = classic edge detection style. 1 = new jump and hold strategy.

    • find_jump_time_window: float: Length of time window used to find nearby maximum in method 1 (ms)

    • find_jump_threshold: float: Threshold for normalized jump size in method 1 (unitless)

    • ****_tuning: dict where **** is the name of a smoothing function. Within this dict can be:
      • mild_smooth: (ms) Smoothing timescale for mild smooth

      • heavy_smooth_factor: Heavy will be this much stronger than mild

      • mild_smooth_derivative_factor: The deriv is of already smoothed data, but it may need a bit more.

      • heavy_smooth_derivative_factor: The derivative is smoothed again harder so there can be a diff.

      • threshold_on_difference_of_smooths: When mild-heavy is greater than this threshold, it must be during ELM (relative to max diff)

      • threshold_on_difference_of_smoothed_derivatives_plus: When mild(der)-heavy(der) is greater than this threshold, it must be on one of the edges of an ELM.

      • threshold_on_difference_of_smoothed_derivatives_minus: Same as previous, but heavy(der)-mild(der) instead.

      • d_thresh_enhance_factor: When the diff of derivs is positive but the less smoothed derivative is still negative, we’re in danger of getting a false positive, so make the threshold higher.

      • neg_d_thresh_enhance_factor: Same as before but for when mild(der)-heavy(der) is negative instead.

      • debounce: Number of samples in debouncing window

      • leading_side_width_factor: [asym_gauss_smooth ONLY]: how big is the asymmetry?

      • gauss_center_lag: [asym_gauss_smooth ONLY]: Shift center back in time, making the thing laggier. Negative value shifts forward in time, putting more weight on the future.

  • filter_settings

    dict or string If this is ‘default’ or {}, then default settings will be used. To change some of the settings from their machine-specific defaults, set to a dictionary that can contain:

    • elm_phase_range: A two element list or array containing the minimum and maximum acceptable values of ELM phase. ELM phase starts at 0 when an ELM ends and goes to 1 just before the next ELM begins, then flips to -1 and is negative during ELMs. ELM phase increases from -1 to 0 during an ELM before becoming positive again in the next inter-ELM period.

    • elm_since_reject_below: Data are rejected if the time since the last ELM is below this threshold. Useful for packs of closely spaced ELMs; a double spike in D-alpha could register as two ELMs with a very short inter-ELM period between them when phase would increase from 0 to 1 and a slice could be accepted, even if there hadn’t really been any time for ELM recovery. This setting is ignored if its value is <= -10 or if either end of the elm_phase_range is < 0.

    • elm_since_accept_above: Data are accepted if the time since the last ELM is above this threshold, regardless of ELM phase. Useful for analyzing shots that have a mixture of ELMing and non-ELMing phases. An ELM free period will normally be counted as one long inter-ELM period and it will take a long time for ELM phase to increase, which could lead to rejection of too many data. This setting overrides the ELM phase test to allow ELM-free periods to be included.

    • CER_entire_window_must_pass_ELM_filter: Relevant for CER where stime>0. If this is True, then the entire averaging window must be in the “good” part of the ELM phase. If this is False, only the middle has to be in the good part and also no ELMs are allowed to start during the time window in either case.

  • use_filterscopes – None, False, or input satisfying detection_settings -> default_filterscope_for_elms Easier-to-access override for default_filterscope_for_elms in detection_settings

  • attempt_sum_tdi_filterscopoes – bool Try to ask the server to sum D_alpha signals so that they don’t have to be interpolated and summed client side, and so that there will only be one signal transferred. Works sometimes, but not yet entirely reliable. It also doesn’t have an obvious speed benefit.

  • debug – bool Turn on spammier debug print or printd statements and also save intermediate data

  • debug_plots – bool Turn on debugging plots

  • on_failure – string What action should be taken on failure? ‘exception’: raise an exception. ‘pass’: return all ones from filter(), passing all data through the ELM filter (same as no ELM filtering)

  • mode – string ‘elm’: ELM detection (what this class was always meant to do) ‘sawtooth’: Sawtooth detection

  • quiet – bool Suppress some warnings and error messages that would otherwise often be useful.
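
The elm_phase_range / elm_since_reject_below / elm_since_accept_above rules can be sketched as a boolean mask. This is a simplified illustration of the filtering logic only; OMFITelm.filter() additionally handles CER averaging windows and the special sentinel values described above:

```python
import numpy as np

def elm_filter_mask(phase, since, phase_range=(0.2, 0.8),
                    since_reject_below=1.0, since_accept_above=50.0):
    """Simplified ELM filter mask.

    Accept slices whose ELM phase is inside phase_range, reject slices too
    soon after an ELM, and always accept slices in long ELM-free periods.

    phase: ELM phase per slice (0..1 between ELMs, -1..0 during ELMs)
    since: time since the last ELM per slice (ms)
    """
    phase = np.asarray(phase, float)
    since = np.asarray(since, float)
    ok = (phase >= phase_range[0]) & (phase <= phase_range[1])
    ok &= since >= since_reject_below  # reject slices too close to the last ELM
    ok |= since > since_accept_above   # ELM-free-period override wins regardless of phase
    return ok

mask = elm_filter_mask(
    phase=[-0.5, 0.1, 0.5, 0.9, 0.05],
    since=[0.2, 2.0, 5.0, 9.0, 120.0],
)
```

The last slice is accepted despite its low phase because 120 ms since the last ELM exceeds the accept-above threshold.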

lookup_latest_shot()[source]

Looks up the last shot. Works for DIII-D. Used automatically if shot <= 0, in which case the shot number is treated as relative to the last shot.

guess_time_factor(mdsvalue=None, mds_time_units=None, device=None)[source]

Tries to guess the time factor needed to convert to ms

Parameters
  • mdsvalue – OMFITmdsValue instance [optional] Used to obtain mds_time_units, if provided

  • mds_time_units

    string [optional] This string will be compared to common ways of representing various time units. If this is not provided and cannot be obtained, the guess will be based on device only. This setting is ignored if:

    • mdsvalue is provided

    • The device is one which is known to frequently have incorrect units logged in MDSplus.

  • device – string [optional] Defaults to self.device. This shouldn’t have to be supplied except for testing or exotic applications.

Returns

float Factor needed to convert from time units to ms. Returns None if it cannot be determined.
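
A minimal sketch of the units-to-factor mapping this method performs. The set of recognized spellings here is illustrative only, not the actual table used by OMFITelm.guess_time_factor():

```python
def guess_time_factor_sketch(mds_time_units):
    """Map a units string to the factor that converts that unit to ms.

    Returns None if the units string is not recognized, mirroring the
    documented behavior of returning None when undeterminable.
    """
    table = {
        's': 1000.0, 'sec': 1000.0, 'seconds': 1000.0,
        'ms': 1.0, 'msec': 1.0, 'milliseconds': 1.0,
        'us': 1e-3, 'microseconds': 1e-3,
    }
    return table.get(str(mds_time_units).strip().lower())

factor = guess_time_factor_sketch(' Seconds ')  # → 1000.0
```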

get_ip(t)[source]

Gets plasma current on a specified timebase and stores it with other arrays

Parameters

t – 1D float array Time in ms

select_filterscope()[source]

Automatically pick which filterscope(s) or D_alpha measurement(s) to use as input to the ELM detection algorithm

get_omega_ce()[source]

Gathers equilibrium data and calculates local electron cyclotron emission frequency for use in mapping.

This is relevant for sawtooth mode only.

Returns

time (ms), rho (unitless), psi_N (unitless), omega_ce (radians/sec) time is 1D with length nt All others are 2D vs time and space with shape (nt, nx)

select_sawtooth_signal(rho_close_to_axis=0.1, rho_far_from_axis=0.4)[source]

Automatically chooses signals to use for sawtooth detection.

First: try to get within rho < rho_close_to_axis. Second: avoid cut-off. If there are no good channels (no cut-off) within rho <= rho_close_to_axis, get the closest channel that’s not cut-off, but don’t look at rho >= rho_far_from_axis for it. For cut-off, use average density to estimate local density, based on the assumption that anything farther out than the top of the pedestal can be ignored anyway, and that density has a small gradient in the core. This might be okay.

There is no relativistic correction for frequency, which could lead to a small error in position which is deemed to be irrelevant for this application.

Parameters
  • rho_close_to_axis – float between 0 and 1 The first try is to find non-cut-off channels with rho <= rho_close_to_axis

  • rho_far_from_axis – float between 0 and 1 If no channels with rho <= rho_close_to_axis, find the closest non-cut-off channel, but only use it if its rho < rho_far_from_axis

Returns

list of strings Pointnames for good ECE channels to use in sawtooth detection

set_filterscope(fs)[source]

Change which filterscope(s) are used and delete any data which would become inconsistent with the new choice.

Parameters

fs – string or list of strings giving pointnames for filterscopes Should follow the rules for use_filterscopes keyword in __init__

set_detection_params(detection_settings)[source]

Updates ELM detection settings and clears out-of-date data, if any

Parameters

detection_settings – dictionary consistent with the detection_settings keyword in __init__

set_filter_params(filter_settings)[source]

Updates ELM filter settings so that subsequent calls to .filter() may give different results.

Parameters

filter_settings – dictionary consistent with the filter_settings keyword in __init__

get_dalpha()[source]

Gathers D_alpha for use in ELM_detection

check_detection_inputs(method=None, count=0)[source]

Makes sure the selected detection method is compatible with input data.

Raises OMFITelmIncompatibleData or changes method if a problem is found, depending on whether fallback is allowed.

This only catches very basic problems that can be found early in the process! It doesn’t abort after starting detection.

Parameters
  • method – int Detection method index

  • count – int Used to prevent infinite loops when the method updates more than once.

Returns

int If fallback is allowed, an invalid method may be updated to a new method and returned.

mask_remove_disruptions(t)[source]

Finds a mask that should be suitable for removing disruptions

Disruptions can come with a huge D_alpha flash that can ruin the scale of the normal ELMs

Parameters

t – 1D float array Relevant timebase in ms

Returns

1D bool array
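
The idea of rejecting a disruption flash can be sketched as a threshold on the signal relative to its typical level. The factor-of-median criterion below is an illustrative assumption; the actual mask_remove_disruptions() logic may differ:

```python
import numpy as np

def disruption_mask_sketch(dalpha, threshold_factor=10.0):
    """Return True for samples to keep, False for suspected disruption flashes.

    A sample is flagged when D_alpha exceeds threshold_factor times the
    median level, since a disruption flash can dwarf normal ELM amplitudes.
    """
    dalpha = np.asarray(dalpha, float)
    return dalpha < threshold_factor * np.median(dalpha)

d_alpha = np.array([1.0, 1.2, 0.9, 1.1, 50.0, 1.0, 1.3])  # one huge flash
keep = disruption_mask_sketch(d_alpha)
```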

detect(**kw)[source]

Chooses which ELM detection method to use and calls the appropriate function

Parameters

kw – keywords to pass to actual worker function

jump_and_hold_detect(calc_elm_freq=True)[source]

New detection algorithm. Focuses on finding just the start of each ELM (when D_alpha jumps up) at first, then forcing the ELM state to be held until D_alpha drops back down again.

Parameters

calc_elm_freq – bool

classic_edge_detect(report_profiling=False, calc_elm_freq=True)[source]

Uses a Difference of Smooths (generalized from difference of gaussians edge detection) scheme to find the edges of D_alpha bursts. It also runs a DoS scheme on the derivative of D_alpha w.r.t. time. This allows time points (in the D_alpha series) to be flagged as soon as the ELM starts: it doesn’t just get the tip, but instead it takes the whole thing.

Parameters
  • calc_elm_freq – bool Run ELM frequency calculation

  • report_profiling – bool Reports time taken to reach / time taken between checkpoints in the code. Used to identify bottlenecks and support optimization.

static sanitize_elm_flag(elm_flag)[source]

Force ELM flag to start and stop in non-ELM state

Parameters

elm_flag – array (bool-like 1s & 0s or Trues & Falses)

Returns

Sanitized ELM flag that follows the rules
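
A minimal sketch of the rule: any ELM state touching either end of the array is cleared so the flag begins and ends in the non-ELM (0) state. This is an illustrative reimplementation, not the OMFIT source:

```python
import numpy as np

def sanitize_elm_flag_sketch(elm_flag):
    """Force the ELM flag to start and stop in the non-ELM (0) state by
    clearing any run of 1s that touches the start or end of the array."""
    flag = np.asarray(elm_flag, int).copy()
    i = 0
    while i < len(flag) and flag[i]:  # clear a leading ELM run
        flag[i] = 0
        i += 1
    j = len(flag) - 1
    while j >= 0 and flag[j]:         # clear a trailing ELM run
        flag[j] = 0
        j -= 1
    return flag

clean = sanitize_elm_flag_sketch([1, 1, 0, 1, 1, 0, 0, 1])
```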

elm_hold_correction(x, y, elm_flag2)[source]

Forces ELM flag to stay in ELMy state (set) until D_alpha level returns to low (close to pre-ELM) level

Parameters
  • x – 1D float array Time base for data

  • y – float array matching shape of x D_alpha measurements or similar signal

  • elm_flag2 – int array matching shape of x Flag for during-ELM state (1) vs. no-ELM state (0)

Returns

Adjusted ELM flag

calculate_elm_detection_quantities(x, elm_flag2, no_save=False, shot=None)[source]

Finish detection by compressing the ELM flag to 4 points per ELM and calculating quantities like ELM phase

Separate from .detect() to make room for different detection algorithms which will all finish the same way.

Could also be used for convenience with an externally calculated ELM flag by passing in your own ELM detection results (and using no_save=True to avoid disturbing results calculated by the class, if you like).

Parameters
  • x – float array Time base in ms

  • elm_flag2 – int array matching length of x Flag indicating when ELMs are happening (1) or not (0)

  • no_save – bool Disable saving of results; return them in a dictionary instead

  • shot – [optional] Only used in announcements. If None, self.shot will be used. If you are passing in some other stuff and don’t want inconsistent printd statements, you can pass in a different thing for shot. It would traditionally be an int, but it could be a string or something since it only gets printed here.

Returns

None or dict Returns results in a dict if no_save is set. Otherwise, results are just saved into the class and nothing is returned.

calc_frequency(method=None, plot_result=False)[source]

Calculates ELM frequency

Parameters
  • method – None or int None: use value in settings int: manual override: ID number of method you want to use - 0: very simple: 1 / local period - 1: simple w/ smooth - 2: process period then invert - 3: method 0 then interpolate and smooth

  • plot_result – bool Make a plot at the end

filter(times_to_filter, cer_mode=False, stimes=0.0, apply_elm_filter=True, debug=None, on_failure=None)[source]

Use ELM detector & ELM filtering settings to determine whether each element in an array of times is good or bad.

Parameters
  • times_to_filter – numeric 1d array or a list of such arrays For most signals (like Thomson scattering data): a single 1D array of times in ms. For CER: a list of 1D arrays, one for each CER channel.

  • cer_mode – bool Assume times_to_filter is a list of lists/arrays, one entry per channel (loop through elements of times, each one of which had better itself have several elements)

  • stimes – float -OR- array or list of arrays matching dimensions of times_to_filter Averaging time window for CER. This typically set to 2.5 ms, but it can vary (read it from your data if possible)

  • apply_elm_filter – bool Debugging option: if this is set False, the script will go through its pre-checks and setup without actually doing any ELM-filtering.

  • debug – bool [optional] Override class debug flag for just the filtering run. Leave at None to use self.debug instead of overriding.

  • on_failure – str [optional] Override class on_failure flag. Sets behavior when filtering is not possible. ‘pass’: Pass all data (elm_okay = 1 for all) ‘exception’: raise an exception. (default for unrecognized)

Returns

array of bools or list of arrays matching shape of times_to_filter Flag indicating whether each sample in times_to_filter is okay according to ELM filtering rules
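The filtering rule can be pictured as an interval-overlap test: a sample is rejected when its averaging window touches a detected ELM. A minimal sketch under that assumption (the helper name and the interval-list input are hypothetical, not the OMFIT API):

```python
def elm_filter_mask(times, elm_intervals, stimes=0.0):
    """Sketch of ELM filtering: flag a sample as bad if its averaging
    window [t - stimes, t + stimes] overlaps any ELM interval.

    times: sample times in ms
    elm_intervals: list of (start, end) ELM intervals in ms
    stimes: averaging half-window in ms (e.g. ~2.5 for CER)
    Returns a list of bools: True = sample is okay.
    """
    okay = []
    for t in times:
        lo, hi = t - stimes, t + stimes
        bad = any(start <= hi and lo <= end for start, end in elm_intervals)
        okay.append(not bad)
    return okay

# Samples at 5 and 25 ms are clear of the ELM spanning 10-20 ms
mask = elm_filter_mask([5.0, 12.0, 25.0], [(10.0, 20.0)])
```

For CER, the same test would simply be repeated per channel with that channel's stimes.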

plot(fig=None, axs=None)[source]

Launches default plot, which is normally self.elm_detection_plot(). Can be changed by setting self.default_plot.

Returns

(Figure instance, array of Axes instances) Tuple containing references to figure and axes used by the plot

elm_frequency_plot(time_range=None, time_range_for_mean=None, overlay=False, fig=None, axs=None, quiet=False, **plotkw)[source]

Plots calculated ELM frequency

Parameters
  • time_range – two element iterable with numbers The plot will initially zoom to this time range (ms).

  • time_range_for_mean – two element numeric, True, or False/None - True: time_range will be used to define the interval, then the mean ELM frequency will be calculated - False or None: no mean ELM frequency calculation - Two element numeric: mean ELM frequency will be calculated in this interval; can differ from time_range.

  • overlay – bool Overlay on existing figure instead of making a new one

  • fig – matplotlib Figure instance Figure to use when overlaying.

  • axs – List of at least two matplotlib Axes instances Axes to use when overlaying

  • quiet – bool Convert all print to printd

  • plotkw – Additional keywords are passed to pyplot.plot()

Returns

(Figure instance, array of Axes instances)

elm_detection_plot(**kw)[source]

Plots important variables related to ELM detection

Returns

(Figure instance, array of Axes instances)

get_quantity_names()[source]

Gives names of quantities like D_alpha or d/dt(T_e) and ELM vs sawtooth

Returns

tuple of strings containing:

  • Name of the raw Y variable (like D_alpha)

  • Name of the event being detected (like ELM or sawtooth) for use in middle of sentence

  • Name of event being detected (like ELM or Sawtooth) for use at start of sentence (capitalized 1st letter)

plot_phase(ax=None, standalone=True)[source]

Plots ELM phase

plot_signal_with_event_id(ax=None, wt=None, shade_elms=True, standalone=True)[source]

Plots the signal used for detection and shades or recolors intervals where events are detected

Parameters
  • ax – Axes instance

  • wt – array for masking

  • shade_elms – bool

  • standalone – bool

plot_more_event_timing_info(ax, crop_elm_since=True, standalone=True)[source]

Plots details related to ELM/sawtooth/whatever timing, like time since the last event

Parameters
  • ax – Axes instance

  • crop_elm_since – bool

  • standalone – bool

plot_raw_sawtooth_info(ax=None, wt=None, standalone=True, shade_elms=True)[source]

Plots raw sawtooth signal (not -dTe/dt). Only makes sense in sawtooth mode.

Parameters
  • ax – Axes instance

  • wt – array of bools Mask for time range to plot

  • standalone – bool

  • shade_elms – bool

plot_hold_correction(ax=None, wt=None, time_range=None, standalone=True, shade_elms=True)[source]

Plot explanation of the hold correction to the ELM flag. Only works if detection was done with debug = True

Parameters
  • ax – Axes instance

  • wt – array of bools Time mask

  • time_range – two element numeric iterable

  • standalone – bool

  • shade_elms – bool

jump_and_hold_elm_detection_plot(time_zoom=None, crop_data_to_zoom_range=True, crop_elm_since=True, show_phase=True, show_more_timing=True, show_details=True, hide_y=False, hide_numbers=False, legend_outside=False, notitles=False, shade_elms=True, figsize=None, fig=None, axs=None)[source]

Plots important variables related to jump & hold ELM detection

This both demonstrates how the ELM detector works and serves as a diagnostic plot that can help with tuning.

Parameters
  • time_zoom – two element iterable containing numbers Zoom in to this time range in ms If None, auto selects the default for the current device

  • crop_data_to_zoom_range – bool Crop data to range given by time_zoom before calling plot. This can prevent resampling, which will make the plot better.

  • crop_elm_since – float or bool float = plot max in ms. True = auto-scale sensibly. False = auto-scale stupidly.

  • show_phase – bool Plot ELM phase in an additional subplot. Phase is 0 at the end of an ELM, then increases during the inter-ELM period until it reaches +1 at the start of the next ELM. Then it jumps to -1 and increases back to 0 during the ELM.

  • show_more_timing – bool Include an additional subplot showing more timing details like time since last ELM & local ELM period length

  • show_details – bool Shows details of how ELM detection works (individual terms like DS, DD, etc.) in a set of add’l subplots.

  • hide_y – bool Turns off numbers on the y-axis and in some legend entries. Useful if you want to consider D_alpha to be in arbitrary units and don’t want to be distracted by numbers.

  • hide_numbers – bool Hides numerical values of smoothing time scales and other settings used in ELM detection. Useful if you want shorter legend entries or a simpler looking plot.

  • legend_outside – bool Place the legends outside of the plots; useful if you have the plots positioned so there is empty space to the right of them.

  • notitles – bool Suppress titles on the subplots.

  • shade_elms – bool On the ELM ID plot, shade between 0 and D_alpha

  • figsize – Two element iterable containing numbers (X, Y) Figure size in cm

  • fig – Figure instance Used for overlay

  • axs – List of Axes instances Used for overlay. Must be at least long enough to accommodate the number of plots ordered.

Returns

(Figure instance, array of Axes instances)

classic_elm_detection_plot(time_zoom=None, crop_data_to_zoom_range=True, crop_elm_since=True, show_phase=True, show_more_timing=True, show_details=True, hide_y=False, hide_numbers=False, legend_outside=False, notitles=False, shade_elms=True, figsize=None, fig=None, axs=None)[source]

Plots important variables related to classic ELM detection

This both demonstrates how the ELM detector works and serves as a diagnostic plot that can help with tuning.

Parameters
  • time_zoom – two element iterable containing numbers Zoom in to this time range in ms If None, auto selects the default for the current device

  • crop_data_to_zoom_range – bool Crop data to range given by time_zoom before calling plot. This can prevent resampling, which will make the plot better.

  • crop_elm_since – float or bool float = plot max in ms. True = auto-scale sensibly. False = auto-scale stupidly.

  • show_phase – bool Plot ELM phase in an additional subplot. Phase is 0 at the end of an ELM, then increases during the inter-ELM period until it reaches +1 at the start of the next ELM. Then it jumps to -1 and increases back to 0 during the ELM.

  • show_more_timing – bool Include an additional subplot showing more timing details like time since last ELM & local ELM period length

  • show_details – bool Shows details of how ELM detection works (individual terms like DS, DD, etc.) in a set of add’l subplots.

  • hide_y – bool Turns off numbers on the y-axis and in some legend entries. Useful if you want to consider D_alpha to be in arbitrary units and don’t want to be distracted by numbers.

  • hide_numbers – bool Hides numerical values of smoothing time scales and other settings used in ELM detection. Useful if you want shorter legend entries or a simpler looking plot.

  • legend_outside – bool Place the legends outside of the plots; useful if you have the plots positioned so there is empty space to the right of them.

  • notitles – bool Suppress titles on the subplots.

  • shade_elms – bool On the ELM ID plot, shade between 0 and D_alpha

  • figsize – Two element iterable containing numbers (X, Y) Figure size in cm

  • fig – Figure instance Used for overlay

  • axs – List of Axes instances Used for overlay. Must be at least long enough to accommodate the number of plots ordered.

Returns

(Figure instance, array of Axes instances)

omfit_environment

class omfit_classes.omfit_environment.OMFITenv(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict

This class is used to retrieve and parse environmental variables on a server

Parameters
  • server – server from which to retrieve the environmental variables

  • tunnel – tunnel to reach the server

  • loadStartupFiles – whether the user startup files should be parsed

  • string – string to be parsed instead of connecting to a server

omfit_eped

omfit_classes.omfit_eped.eped_nn_fann(a, betan, bt, delta, ip, kappa, m, neped, r, zeffped, z=1, zi=6, mi=12, solution='H', model='multiroot')[source]

Routine that returns results of the EPED1-NN model

Parameters
  • a – scalar or array of minor radius [m]

  • betan – scalar or array of beta_n

  • bt – scalar or array of toroidal magnetic field [T]

  • delta – scalar or array of triangularity

  • ip – scalar or array of plasma current [MA]

  • kappa – scalar or array of elongation

  • m – scalar or array of ion mass (2 for D and 2.5 for DT)

  • neped – scalar or array of density at pedestal [1E19 m^-3]

  • r – scalar or array of major radius [m]

  • zeffped – scalar or array of effective ion charge at pedestal

  • z – scalar or array of ion charge (only used for calculating Te from pressure)

  • mi – scalar or array of impurity ion mass (only used for calculating Te from pressure)

  • Zi – scalar or array of impurity ion charge (only used for calculating Te from pressure)

  • solution – ‘H’ or ‘superH’

Returns

scalars or arrays of electron temperature in keV and pedestal width as fraction of psi

omfit_classes.omfit_eped.eped_nn_tf(a, betan, bt, delta, ip, kappa, m, neped, r, zeffped, z=1, zi=6, mi=12, solution='H', diamag='GH', model='eped1nn/models/EPED_mb_128_pow_norm_common_30x10.pb')[source]

Routine that returns results of the EPED1-NN model

Parameters
  • a – scalar or array of minor radius [m]

  • betan – scalar or array of beta_n

  • bt – scalar or array of toroidal magnetic field [T]

  • delta – scalar or array of triangularity

  • ip – scalar or array of plasma current [MA]

  • kappa – scalar or array of elongation

  • m – scalar or array of ion mass (2 for D and 2.5 for DT)

  • neped – scalar or array of density at pedestal [1E19 m^-3]

  • r – scalar or array of major radius [m]

  • zeffped – scalar or array of effective ion charge at pedestal

  • z – scalar or array of ion charge (only used for calculating Te from pressure)

  • mi – scalar or array of impurity ion mass (only used for calculating Te from pressure)

  • Zi – scalar or array of impurity ion charge (only used for calculating Te from pressure)

  • solution – ‘H’ or ‘superH’

  • diamag – diamagnetic stabilization model ‘GH’ or ‘G’ or ‘H’

  • model – string to select the EPED1NN model

Returns

scalars or arrays of electron temperature in keV and pedestal width as fraction of psi

omfit_classes.omfit_eped.eped_nn(a, betan, bt, delta, ip, kappa, m, neped, r, zeffped, z=1, zi=6, mi=12, solution='H', model='multiroot')

Routine that returns results of the EPED1-NN model

Parameters
  • a – scalar or array of minor radius [m]

  • betan – scalar or array of beta_n

  • bt – scalar or array of toroidal magnetic field [T]

  • delta – scalar or array of triangularity

  • ip – scalar or array of plasma current [MA]

  • kappa – scalar or array of elongation

  • m – scalar or array of ion mass (2 for D and 2.5 for DT)

  • neped – scalar or array of density at pedestal [1E19 m^-3]

  • r – scalar or array of major radius [m]

  • zeffped – scalar or array of effective ion charge at pedestal

  • z – scalar or array of ion charge (only used for calculating Te from pressure)

  • mi – scalar or array of impurity ion mass (only used for calculating Te from pressure)

  • Zi – scalar or array of impurity ion charge (only used for calculating Te from pressure)

  • solution – ‘H’ or ‘superH’

Returns

scalars or arrays of electron temperature in keV and pedestal width as fraction of psi

omfit_eqdsk

omfit_classes.omfit_eqdsk.read_basic_eq_from_mds(device='DIII-D', shot=None, tree='EFIT01', quiet=False, toksearch_mds=None, **kw)[source]

Read basic equilibrium data from MDSplus. This is a lightweight function for reading simple data from all EFIT slices at once without making g-files.

Parameters
  • device – str The tokamak that the data correspond to (‘DIII-D’, ‘NSTX’, etc.)

  • server – str [Optional, special purpose] MDSplus server to draw data from. Use this if you are connecting to a server that is not recognized by the tokamak() command, like vidar, EAST_US, etc. If this is None, it will be copied from device.

  • shot – int Shot number from which to read data

  • tree – str Name of the MDSplus tree to connect to, like ‘EFIT01’, ‘EFIT02’, ‘EFIT03’, …

  • g_file_quantities – list of strings Quantities to read from the sub-tree corresponding with the EFIT g-file. Example: [‘r’, ‘z’, ‘rhovn’]

  • a_file_quantities – list of strings Quantities to read from the sub-tree corresponding with the EFIT a-file. Example: [‘area’]

  • measurements – list of strings Quantities to read from the MEASUREMENTS tree. Example: [‘fccurt’]

  • derived_quantities – list of strings Derived quantities to be calculated and returned. This script understands a limited set of simple calculations: ‘time’, ‘psin’, ‘psin1d’ Example: [‘psin’, ‘psin1d’, ‘time’]

  • other_results – list of strings Other quantities to be gathered from the parent tree that holds gEQDSK and aEQDSK. Example: [‘DATE_RUN’]

  • quiet – bool

  • get_all_meas – bool Fetch measurement signals according to their time basis, which includes extra time slices that failed to fit. The time axis will be available in [‘mtimes’]

  • toksearch_mds – OMFITtoksearch instance An already fetched and loaded OMFITtoksearch object, expected to have fetched all of the signals for the mdsValues in this file.

  • allow_shot_tree_translation – bool Allow the real shot and tree to be translated to the fake shot stored in the EFIT tree

Returns

dict
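The ‘psin’ and ‘psin1d’ derived quantities are normalized poloidal flux, conventionally psin = (psi - psi_axis)/(psi_boundary - psi_axis). A sketch of that convention (assumed form, not the exact implementation):

```python
def psin_grid(nw):
    """Sketch of the 'psin1d' derived quantity: a uniform normalized
    poloidal flux grid, 0 at the axis and 1 at the boundary (assumed
    form, matching the usual EFIT convention)."""
    return [i / (nw - 1.0) for i in range(nw)]

def normalize_psi(psi, psi_axis, psi_boundary):
    """psin = (psi - psi_axis) / (psi_boundary - psi_axis)"""
    return (psi - psi_axis) / (psi_boundary - psi_axis)

# Halfway in flux between axis and boundary -> psin = 0.5
half = normalize_psi(-0.1, -0.3, 0.1)
```

The 2D ‘psin’ map is the same normalization applied pointwise to the psi(R,Z) grid at each time slice.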

omfit_classes.omfit_eqdsk.from_mds_plus(device=None, shot=None, times=None, exact=False, snap_file='EFIT01', time_diff_warning_threshold=10, fail_if_out_of_range=True, get_afile=True, get_mfile=False, show_missing_data_warnings=None, debug=False, quiet=False, close=False)[source]

Gathers EFIT data from MDSplus, interpolates to the desired times, and creates a set of g/a/m-files from the results.

Links to EFIT documentation::

https://fusion.gat.com/theory/Efit (home)
https://fusion.gat.com/theory/Efitiofiles (list of input/output files)
https://fusion.gat.com/theory/Efitin1 (IN1 namelist description, primary input)

Parameters
  • device – string Name of the tokamak or MDSserver from whence cometh the data.

  • shot – int Shot for which data are to be gathered.

  • times – numeric iterable Time slices to gather in ms, even if working with an MDS server that normally operates in seconds.

  • exact – bool Fail instead of interpolating if the exact time-slices are not available.

  • snap_file – string Description of which EFIT tree to gather from.

  • time_diff_warning_threshold – float Issue a warning if the difference between a requested time slice and the closest time slice in the source EFIT exceeds this threshold.

  • fail_if_out_of_range – bool Skip requested times that fail the above threshold test.

  • get_afile – bool gather A-file quantities as well as G-file quantities.

  • get_mfile – bool gather M-file quantities as well as G-file quantities.

  • show_missing_data_warnings – bool, int, or str
    1 or True: Print a warning for each missing item when setting it to a default value. May not be necessary because some things in the a-file don’t seem very important and are always missing from MDSplus.
    2 or “once”: Print warning messages for missing items only if the message would be unique. Don’t repeat warnings about the same missing quantity for subsequent time-slices.
    0 or False: printd instead (use quiet if you really don’t want it to print anything)
    None: select based on device. Most devices should default to ‘once’.

  • debug – bool Save intermediate results to the tree for inspection.

  • quiet – bool

  • close – bool Close each file at each time before going on to the next time

Returns

a dictionary containing a set of G-files in another dictionary named ‘gEQDSK’, and, optionally, a set of A-files under ‘aEQDSK’ and M-files under ‘mEQDSK’
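The time-matching behavior controlled by time_diff_warning_threshold and fail_if_out_of_range can be sketched as a nearest-slice lookup (hypothetical helper, not the OMFIT implementation):

```python
def pick_time_slice(available_times, requested_time,
                    time_diff_warning_threshold=10, fail_if_out_of_range=True):
    """Sketch of the time-matching logic: find the closest available EFIT
    time slice (ms); if it is farther than the threshold, either skip the
    requested time (return None) or raise, depending on fail_if_out_of_range.
    """
    closest = min(available_times, key=lambda t: abs(t - requested_time))
    if abs(closest - requested_time) > time_diff_warning_threshold:
        if fail_if_out_of_range:
            raise ValueError('no EFIT slice within threshold of %s ms' % requested_time)
        return None  # caller would skip this time and warn
    return closest
```

With exact=True, the comparison would instead demand a zero time difference.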

class omfit_classes.omfit_eqdsk.OMFIT_pcs_shape(*args, **kwargs)[source]

Bases: omfit_classes.omfit_ascii.OMFITascii, omfit_classes.sortedDict.SortedDict

load()[source]

omfit_classes.omfit_eqdsk.read_basic_eq_from_toksearch(device='DIII-D', server=None, shots=None, tree='EFIT01', quiet=False, g_file_quantities=['r', 'z', 'rhovn'], a_file_quantities=['area'], derived_quantities=['psin', 'psin1d', 'time'], measurements=['fccurt'], other_results=['DATE_RUN'])[source]

omfit_classes.omfit_eqdsk.x_point_search(rgrid, zgrid, psigrid, r_center=None, z_center=None, dr=None, dz=None, zoom=10, hardfail=False, **kw)[source]

Improve accuracy of X-point coordinates by upsampling a region of the flux map around the initial estimate

Needs some sort of initial estimate to define a search region

Parameters
  • rgrid – 1d float array R coordinates of the grid

  • zgrid – 1d float array Z coordinates of the grid

  • psigrid – 2d float array psi values corresponding to rgrid and zgrid

  • r_center – float Center of search region in r; units should match rgrid. Defaults to result of x_point_quick_search()

  • z_center – float Center of the search region in z.

  • dr – float Half width of the search region in r. Defaults to about 5 grid cells.

  • dz – Half width of the search region in z. Defaults to about 5 grid cells.

  • zoom – int Scaling factor for upsample

  • hardfail – bool Raise an exception on failure

  • kw – additional keywords passed to x_point_quick_search if r_center and z_center are not given.

Returns

two element float array Higher quality estimate for the X-point R,Z coordinates with units matching rgrid

omfit_classes.omfit_eqdsk.x_point_quick_search(rgrid, zgrid, psigrid, psi_boundary=None, psi_boundary_weight=1.0, zsign=0)[source]

Make a quick and dirty estimate for X-point position to guide higher quality estimation

The goal is to identify the primary x-point to within a grid cell or so

Parameters
  • rgrid – 1d float array R coordinates of the grid

  • zgrid – 1d float array Z coordinates of the grid

  • psigrid – 2d float array psi values corresponding to rgrid and zgrid

  • psi_boundary – float [optional] psi value on the boundary; helps distinguish the primary x-point from other field nulls If this is not provided, you may get the wrong x-point.

  • psi_boundary_weight – float Sets the relative weight of matching psi_boundary compared to minimizing B_pol. 1 gives ~equal weight after normalizing Delta psi by grid spacing and r (to make it comparable to B_pol in the first place). 10 gives higher weight to psi_boundary, which might be nice if you keep locking onto the secondary x-point. Actually, it seems like the outcome isn’t very sensitive to this weight: psi_boundary is an adequate tie breaker between two B_pol nulls with weights as low as 1e-3 for some cases, and it’s not strong enough to move the quick estimate to a different grid cell on a 65x65 grid with weights as high as 1e2. Even then, the result is still close enough to the true X-point that the higher quality algorithm can find the same answer. So, just leave this at 1.

  • zsign – int If you know the X-point you want is on the top or the bottom, you can pass in 1 or -1 to exclude the wrong half of the grid.

Returns

two element float array Low quality estimate for the X-point R,Z coordinates with units matching rgrid
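The core idea, searching the coarse grid for the cell where the poloidal field (proxied by |grad psi|) is smallest, can be sketched as follows; this omits the psi_boundary weighting and zsign masking and is not the OMFIT algorithm:

```python
def quick_xpoint_estimate(rgrid, zgrid, psigrid):
    """Sketch of a quick X-point search: pick the interior grid cell where
    the central-difference |grad psi|^2 (a B_pol proxy) is smallest.
    psigrid is indexed [i_z][j_r], matching rgrid and zgrid.
    """
    best, best_score = None, float('inf')
    for i in range(1, len(zgrid) - 1):
        for j in range(1, len(rgrid) - 1):
            dpsi_dr = (psigrid[i][j + 1] - psigrid[i][j - 1]) / (rgrid[j + 1] - rgrid[j - 1])
            dpsi_dz = (psigrid[i + 1][j] - psigrid[i - 1][j]) / (zgrid[i + 1] - zgrid[i - 1])
            score = dpsi_dr ** 2 + dpsi_dz ** 2
            if score < best_score:
                best, best_score = (rgrid[j], zgrid[i]), score
    return best

# A saddle (field null) at (r, z) = (1.7, -1.2) on a coarse grid
rg = [1.5 + 0.1 * j for j in range(5)]   # 1.5 .. 1.9
zg = [-1.4 + 0.1 * i for i in range(5)]  # -1.4 .. -1.0
psi = [[(r - 1.7) ** 2 - (z + 1.2) ** 2 for r in rg] for z in zg]
best = quick_xpoint_estimate(rg, zg, psi)
```

Adding the psi_boundary term would simply add a weighted (psi - psi_boundary)^2 penalty to score, which is how the tie between primary and secondary nulls is broken.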

omfit_classes.omfit_eqdsk.gEQDSK_COCOS_identify(bt, ip)[source]

Returns the native COCOS that an unmodified gEQDSK would obey, defined by sign(Bt) and sign(Ip).

In order for psi to increase from axis to edge and for q to be positive, all cases use sigma_RpZ=+1 (phi is counterclockwise) and exp_Bp=0 (psi is flux/(2*pi)). We want:

sign(psi_edge-psi_axis) = sign(Ip)*sigma_Bp > 0 (psi always increases in gEQDSK)
sign(q) = sign(Ip)*sign(Bt)*sigma_rhotp > 0 (q always positive in gEQDSK)

============================================
Bt    Ip    sigma_Bp    sigma_rhotp    COCOS
============================================
+1    +1       +1           +1           1
+1    -1       -1           -1           3
-1    +1       +1           -1           5
-1    -1       -1           +1           7
============================================
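The sign table can be expressed directly as a lookup (a sketch of the mapping only; gEQDSK_COCOS_identify is the real function):

```python
def native_gEQDSK_cocos(bt, ip):
    """The sign table as a lookup: native COCOS from sign(Bt), sign(Ip).
    All four cases share sigma_RpZ=+1 and exp_Bp=0, so only the two signs
    are needed to select among COCOS 1, 3, 5, 7.
    """
    sign = lambda x: 1 if x > 0 else -1
    table = {(1, 1): 1, (1, -1): 3, (-1, 1): 5, (-1, -1): 7}
    return table[(sign(bt), sign(ip))]
```

For example, a discharge with Bt > 0 and Ip < 0 would natively be COCOS 3.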

omfit_classes.omfit_eqdsk.OMFITeqdsk(filename, EFITtype=None, **kw)[source]

Automatically determine the type of an EFIT file and parse it with the appropriate class. It is faster to just directly use the appropriate class. Using the right class also avoids problems because some files technically can be parsed with more than one class (no exceptions thrown), giving junk results.

Parameters
  • filename – string Name of the file on disk, including path

  • EFITtype – string Letter giving the type of EFIT file, like ‘g’. Should be in ‘gamks’. If None, then the first letter in the filename is used to determine the file type If this is also not helping, then a brute-force load is attempted

  • strict – bool Filename (not including path) must include the letter giving the file type. Prevents errors like using sEQDSK to parse g133221.01000, which might otherwise be possible.

  • **kw – Other keywords to pass to the class that is chosen.

Returns

OMFIT*eqdsk instance

class omfit_classes.omfit_eqdsk.OMFITaeqdsk(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.omfit_ascii.OMFITascii

class used to interface A files generated by EFIT

Parameters
  • filename – filename passed to OMFITascii class

  • **kw – keyword dictionary passed to OMFITascii class

load(**kw)[source]

Method used to read a-files

save()[source]

Method used to write a-files

from_mdsplus(device=None, shot=None, time=None, exact=False, SNAPfile='EFIT01', time_diff_warning_threshold=10, fail_if_out_of_range=True, show_missing_data_warnings=None, quiet=False)[source]

Fill in aEQDSK data from MDSplus

Parameters
  • device – The tokamak that the data correspond to (‘DIII-D’, ‘NSTX’, etc.)

  • shot – Shot number from which to read data

  • time – time slice from which to read data

  • exact – get data from the exact time-slice

  • SNAPfile – A string containing the name of the MDSplus tree to connect to, like ‘EFIT01’, ‘EFIT02’, ‘EFIT03’, …

  • time_diff_warning_threshold – raise error/warning if closest time slice is beyond this threshold

  • fail_if_out_of_range – Raise error or warn if closest time slice is beyond time_diff_warning_threshold

  • show_missing_data_warnings – Print warnings for missing data
    1 or True: display with printw
    2 or ‘once’: only print the first time
    0 or False: display all but with printd instead of printw
    None: select based on device. Most will choose ‘once’.

  • quiet – verbosity

Returns

self

add_aeqdsk_documentation()[source]

class omfit_classes.omfit_eqdsk.OMFITgeqdsk(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.omfit_ascii.OMFITascii

class used to interface G files generated by EFIT

Parameters
  • filename – filename passed to OMFITascii class

  • **kw – keyword dictionary passed to OMFITascii class

transform_signals = {'BCENTR': 'BT', 'CURRENT': 'IP', 'FFPRIM': 'dPSI', 'FPOL': 'BT', 'PPRIME': 'dPSI', 'PSIRZ': 'PSI', 'QPSI': 'Q', 'SIBRY': 'PSI', 'SIMAG': 'PSI'}

surface_integral(*args, **kw)[source]

Cross section integral of a quantity

Parameters

what – quantity to be integrated specified as array at flux surface

Returns

array of the integration from core to edge

volume_integral(*args, **kw)[source]

Volume integral of a quantity

Parameters

what – quantity to be integrated specified as array at flux surface

Returns

array of the integration from core to edge

surfAvg(Q, interp='linear')[source]

Flux surface averaging of a quantity at each flux surface

Parameters
  • Q – 2D quantity to do the flux surface averaging (either 2D array or string from ‘AuxQuantities’, e.g. RHORZ)

  • interp – interpolation method [‘linear’,’quadratic’,’cubic’]

Returns

array of the quantity flux surface averaged for each flux surface

>>> OMFIT['test'] = OMFITgeqdsk(OMFITsrc + '/../samples/g133221.01000')
>>> jpar = OMFIT['test'].surfAvg('Jpar')
>>> pyplot.plot(OMFIT['test']['rhovn'], jpar)
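A flux surface average weights a quantity by dl/B_pol along the surface contour, i.e. <Q> = (∮ Q dl/Bp) / (∮ dl/Bp). A minimal sketch of that definition on a discretized contour (assumed form, not the OMFIT implementation):

```python
def flux_surface_average(Q, dl, bp):
    """Sketch of a flux surface average: <Q> = sum(Q*dl/Bp) / sum(dl/Bp),
    where Q, dl (contour segment lengths), and bp (poloidal field) are
    sampled at the same points along one flux surface contour.
    """
    weights = [d / b for d, b in zip(dl, bp)]
    return sum(q * w for q, w in zip(Q, weights)) / sum(weights)

# A constant quantity averages to itself regardless of the weighting
avg = flux_surface_average([3.0] * 4, [0.1, 0.2, 0.1, 0.2], [1.0, 0.5, 1.0, 0.5])
```

In the class, the contour geometry and B_pol come from fluxSurfaces, so only Q needs to be supplied.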

property cocos

Return COCOS of current gEQDSK as represented in memory

load(raw=False, add_aux=True)[source]

Method used to read g-files

Parameters
  • raw – bool Load gEQDSK exactly as it’s on file, regardless of COCOS

  • add_aux – bool Add AuxQuantities and fluxSurfaces when using raw mode. When not raw, these will be loaded regardless.

save(raw=False)[source]

Method used to write g-files

Parameters

raw – save gEQDSK exactly as it’s in the tree, regardless of COCOS

cocosify(cocosnum, calcAuxQuantities, calcFluxSurfaces, inplace=True)[source]

Method used to convert gEQDSK quantities to desired COCOS

Parameters
  • cocosnum – desired COCOS number (1-8, 11-18)

  • calcAuxQuantities – add AuxQuantities based on new cocosnum

  • calcFluxSurfaces – add fluxSurfaces based on new cocosnum

  • inplace – True: change values in the current gEQDSK; False: return a new gEQDSK

Returns

gEQDSK with proper cocos

native_cocos()[source]

Returns the native COCOS that an unmodified gEQDSK would obey, defined by sign(Bt) and sign(Ip).

In order for psi to increase from axis to edge and for q to be positive, all cases use sigma_RpZ=+1 (phi is counterclockwise) and exp_Bp=0 (psi is flux/(2*pi)). We want:

sign(psi_edge-psi_axis) = sign(Ip)*sigma_Bp > 0 (psi always increases in gEQDSK)
sign(q) = sign(Ip)*sign(Bt)*sigma_rhotp > 0 (q always positive in gEQDSK)

============================================
Bt    Ip    sigma_Bp    sigma_rhotp    COCOS
============================================
+1    +1       +1           +1           1
+1    -1       -1           -1           3
-1    +1       +1           -1           5
-1    -1       -1           +1           7
============================================

flip_Bt_Ip()[source]

Flip direction of the magnetic field and current without changing COCOS

flip_ip()[source]

Flip sign of IP and related quantities without changing COCOS

flip_bt()[source]

Flip sign of BT and related quantities without changing COCOS

bateman_scale(BCENTR=None, CURRENT=None)[source]

Scales toroidal field and current in such a way as to hold poloidal beta constant, keeping flux surface geometry unchanged

  • The psi, p’, and FF’ are all scaled by a constant factor to achieve the desired current

  • The edge F=R*Bt is changed to achieve the desired toroidal field w/o affecting FF’

  • Scaling of other quantities follow from this

The result is a valid Grad-Shafranov equilibrium (if self is one)

Based on the scaling from Bateman and Peng, PRL 38, 829 (1977) https://link.aps.org/doi/10.1103/PhysRevLett.38.829

combineGEQDSK(other, alpha)[source]

Method used to linearly combine current equilibrium (eq1) with other g-file. All quantities are linearly combined, except ‘RBBBS’,’ZBBBS’,’NBBBS’,’LIMITR’,’RLIM’,’ZLIM’,’NW’,’NH’.

OMFIT[‘eq3’]=OMFIT[‘eq1’].combineGEQDSK(OMFIT[‘eq2’],alpha) means: eq3=alpha*eq1+(1-alpha)*eq2

Parameters
  • other – g-file for eq2

  • alpha – linear combination parameter

Returns

g-file for eq3
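The linear combination rule eq3 = alpha*eq1 + (1-alpha)*eq2, with the grid/boundary keys copied rather than combined, can be sketched on plain dicts (hypothetical helper, not the OMFIT method):

```python
def combine_geqdsk_dicts(eq1, eq2, alpha,
                         exclude=('RBBBS', 'ZBBBS', 'NBBBS', 'LIMITR',
                                  'RLIM', 'ZLIM', 'NW', 'NH')):
    """Sketch of eq3 = alpha*eq1 + (1-alpha)*eq2 on plain dicts of
    scalars/lists; excluded keys (boundary, limiter, grid sizes) are
    copied from eq1 instead of being combined.
    """
    eq3 = {}
    for key, v1 in eq1.items():
        if key in exclude:
            eq3[key] = v1
        elif isinstance(v1, list):
            eq3[key] = [alpha * a + (1 - alpha) * b for a, b in zip(v1, eq2[key])]
        else:
            eq3[key] = alpha * v1 + (1 - alpha) * eq2[key]
    return eq3
```

With alpha=1 this reproduces eq1; with alpha=0, eq2 (apart from the copied geometry keys).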

addAuxNamelist()[source]

Adds [‘AuxNamelist’] to the current object

Returns

Namelist object containing auxiliary quantities

delAuxNamelist()[source]

Removes [‘AuxNamelist’] from the current object

addAuxQuantities()[source]

Adds [‘AuxQuantities’] to the current object

Returns

SortedDict object containing auxiliary quantities

fourier(surface=1.0, nf=128, symmetric=True, resolution=2, **kw)[source]

Reconstructs Fourier decomposition of the boundary for fixed boundary codes to use

Parameters
  • surface – Use this normalised flux surface for the boundary (if <0 then original gEQDSK BBBS boundary is used), else the flux surfaces are from FluxSurfaces.

  • nf – number of Fourier modes

  • symmetric – return symmetric boundary

  • resolution – FluxSurfaces resolution factor

  • **kw – additional keyword arguments are passed to FluxSurfaces.findSurfaces

addFluxSurfaces(**kw)[source]

Adds [‘fluxSurface’] to the current object

Parameters

**kw – keyword dictionary passed to fluxSurfaces class

Returns

fluxSurfaces object based on the current gEQDSK file

calc_masks()[source]

Calculate grid masks for limiters, vessel, core and edge plasma

Returns

SortedDict object with 2D maps of masks

plot(usePsi=False, only1D=False, only2D=False, top2D=False, q_contour_n=0, label_contours=False, levels=None, mask_vessel=True, show_limiter=True, xlabel_in_legend=False, useRhop=False, **kw)[source]

Function used to plot g-files. This plot shows flux surfaces in the vessel, pressure, q profiles, P’ and FF’

Parameters
  • usePsi – In the plots, use psi instead of rho, or both

  • only1D – only make profile plots

  • only2D – only make flux surface plot

  • top2D – Plot top-view 2D cross section

  • q_contour_n – If above 0, plot q contours in 2D plot corresponding to rational surfaces of the given n

  • label_contours – Adds labels to 2D contours

  • levels – list of sorted numeric values to pass to 2D plot as contour levels

  • mask_vessel – mask contours with vessel

  • show_limiter – Plot the limiter outline in (R,Z) 2D plots

  • xlabel_in_legend – Show x coordinate in legend instead of under axes (useful for overplots with psi and rho)

  • label – plot item label to apply to lines in 1D plots (only the q plot has a legend called by the geqdsk class itself) and to the boundary contour in the 2D plot (this plot doesn’t call legend by itself)

  • ax – Axes instance to plot in when using only2D

  • **kw – Standard plot keywords (e.g. color, linewidth) will be passed to Axes.plot() calls.

get2D(Q, r, z, interp='linear')[source]

Function to retrieve 2D quantity at coordinates

Parameters
  • Q – Quantity to be retrieved (either 2D array or string from ‘AuxQuantities’, e.g. RHORZ)

  • r – r coordinate for retrieval

  • z – z coordinate for retrieval

  • interp – interpolation method [‘linear’,’quadratic’,’cubic’]

>>> OMFIT['test'] = OMFITgeqdsk(OMFITsrc + '/../samples/g133221.01000')
>>> r = np.linspace(min(OMFIT['test']['RBBBS']), max(OMFIT['test']['RBBBS']), 100)
>>> z = r * 0
>>> tmp = OMFIT['test'].get2D('Br', r, z)
>>> pyplot.plot(r, tmp)

map2D(x, y, X, interp='linear', maskName='core_plasma_mask', outsideOfMask=nan)[source]

Function to map 1D quantity to 2D grid

Parameters
  • x – abscissa of 1D quantity

  • y – 1D quantity

  • X – 2D distribution of 1D quantity abscissa

  • interp – interpolation method [‘linear’,’cubic’]

  • maskName – one among limiter_mask, vessel_mask, core_plasma_mask, edge_plasma_mask or None

  • outsideOfMask – value to use outside of the mask

calc_pprime_ffprim(press=None, pprime=None, Jt=None, Jt_over_R=None, fpol=None)[source]

This method returns the P’ and FF’ given P or P’ and J or J/R based on the current equilibrium fluxsurfaces geometry

Parameters
  • press – pressure

  • pprime – derivative of pressure with respect to psi (P’)

  • Jt – toroidal current

  • Jt_over_R – flux surface averaged toroidal current density over major radius

  • fpol – F

Returns

P’, FF’

calc_Ip(Jt_over_R=None)[source]

This method returns the toroidal current within the flux surfaces based on the current equilibrium fluxsurfaces geometry

Parameters

Jt_over_R – flux surface averaged toroidal current density over major radius

Returns

Ip

add_rhovn()[source]

Calculate RHOVN from PSI and q profile
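RHOVN is the square root of normalized toroidal flux, and the toroidal flux follows from the q profile via Phi(psi) = ∫ q dpsi. A sketch under that assumption (trapezoidal integration; not necessarily the exact implementation):

```python
def rhovn_from_q(psi, q):
    """Sketch of RHOVN from the q profile: toroidal flux
    Phi(psi) = integral of q dpsi (trapezoidal rule), then
    rhovn = sqrt(Phi / Phi_edge) so it runs 0 at the axis to 1 at the edge.
    """
    phi = [0.0]
    for k in range(1, len(psi)):
        phi.append(phi[-1] + 0.5 * (q[k] + q[k - 1]) * (psi[k] - psi[k - 1]))
    phi_edge = phi[-1]
    return [(p / phi_edge) ** 0.5 for p in phi]

# With constant q, Phi is linear in psi, so rhovn = sqrt(psin)
rho = rhovn_from_q([0.0, 0.5, 1.0], [2.0, 2.0, 2.0])
```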

case_info()[source]

Interprets the CASE field of the GEQDSK and converts it into a dictionary

Returns

dict

Contains as many values as can be determined. Fills in None when the correct value cannot be determined.

device
shot
time (within shot)
date (of code execution)
efitid (aka snap file or tree name)
code_version

to_omas(ods=None, time_index=0, allow_derived_data=True)[source]

translate gEQDSK class to OMAS data structure

Parameters
  • ods – input ods to which data is added

  • time_index – time index to which data is added

  • allow_derived_data – bool Allow data to be drawn from fluxSurfaces, AuxQuantities, etc. May trigger dynamic loading.

Returns

ODS

from_omas(ods, time_index=0, profiles_2d_index=0, time=None)[source]

translate OMAS data structure to gEQDSK

Parameters
  • ods – OMAS data structure from which data is extracted

  • time_index – time index to extract data from

  • profiles_2d_index – index of profiles_2d to extract data from

  • time – time in seconds at which to extract the data (if set, it supersedes time_index)

Returns

self

resample(nw_new)[source]

Change gEQDSK resolution

NOTE: This method operates in place

Parameters

nw_new – new grid resolution

Returns

self
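The grid interpolation at the heart of resampling can be sketched with scipy. Only the 2D PSIRZ regridding is shown; the real method also handles the 1D profiles and header fields:

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

def regrid_psirz(R, Z, psirz, nw_new):
    """Interpolate a PSIRZ map (indexed [z, r]) onto a new
    nw_new x nw_new grid spanning the same domain."""
    spl = RectBivariateSpline(Z, R, psirz)
    R2 = np.linspace(R[0], R[-1], nw_new)
    Z2 = np.linspace(Z[0], Z[-1], nw_new)
    return R2, Z2, spl(Z2, R2)

R = np.linspace(1.0, 2.0, 33)
Z = np.linspace(-1.0, 1.0, 33)
RR, ZZ = np.meshgrid(R, Z)
psirz = RR + 2 * ZZ                   # a field cubic splines reproduce exactly
R2, Z2, psi2 = regrid_psirz(R, Z, psirz, 65)
```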

downsample_limiter(max_lim=None, in_place=True)[source]

Downsample the limiter

Parameters
  • max_lim – If max_lim is specified and the number of limiter points is already smaller than max_lim before downsampling, then no downsampling is performed; if it is still larger than max_lim after downsampling, then an error is raised

  • in_place – modify this object in place or not

Returns

downsampled rlim and zlim arrays
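A minimal stand-in for the interface (uniform decimation that keeps the endpoints; the real downsample_limiter is smarter about preserving corners):

```python
import numpy as np

def downsample_contour(r, z, max_pts):
    """Thin an (r, z) contour to at most max_pts points by uniform
    decimation, always keeping the first and last point."""
    if len(r) <= max_pts:
        return np.asarray(r), np.asarray(z)
    idx = np.unique(np.linspace(0, len(r) - 1, max_pts).round().astype(int))
    return np.asarray(r)[idx], np.asarray(z)[idx]

# usage: thin a 500-point circular limiter down to 50 points
theta = np.linspace(0, 2 * np.pi, 500)
rlim, zlim = 2 + np.cos(theta), np.sin(theta)
rd, zd = downsample_contour(rlim, zlim, 50)
```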

downsample_boundary(max_bnd=None, in_place=True)[source]

Downsample the boundary

Parameters
  • max_bnd – If max_bnd is specified and the number of boundary points is already smaller than max_bnd before downsampling, then no downsampling is performed; if it is still larger than max_bnd after downsampling, then an error is raised

  • in_place – modify this object in place or not

Returns

downsampled rbnd and zbnd arrays

from_mdsplus(device=None, shot=None, time=None, exact=False, SNAPfile='EFIT01', time_diff_warning_threshold=10, fail_if_out_of_range=True, show_missing_data_warnings=None, quiet=False)[source]

Fill in gEQDSK data from MDSplus

Parameters
  • device – The tokamak that the data correspond to (‘DIII-D’, ‘NSTX’, etc.)

  • shot – Shot number from which to read data

  • time – time slice from which to read data

  • exact – get data from the exact time-slice

  • SNAPfile – A string containing the name of the MDSplus tree to connect to, like ‘EFIT01’, ‘EFIT02’, ‘EFIT03’, …

  • time_diff_warning_threshold – raise error/warning if the closest time slice is beyond this threshold

  • fail_if_out_of_range – Raise error or warn if closest time slice is beyond time_diff_warning_threshold

  • show_missing_data_warnings – Print warnings for missing data:
    1 or True: yes, print the warnings
    2 or 'once': print only unique warnings; no repeats for the same quantities missing from many time slices
    0 or False: printd instead of printw
    None: select based on device (most devices will choose 'once')

  • quiet – verbosity

Returns

self

from_rz(r, z, psival, p, f, q, B0, R0, ip, resolution, shot=0, time=0, RBFkw={})[source]

Generate gEQDSK file from r, z points

Parameters
  • r – 2D array with R coordinates with 1st dimension being the flux surface index and the second theta

  • z – 2D array with Z coordinates with 1st dimension being the flux surface index and the second theta

  • psival – 1D array with psi values

  • p – 1D array with pressure values

  • f – 1D array with fpoloidal values

  • q – 1D array with safety factor values

  • B0 – scalar vacuum B toroidal at R0

  • R0 – scalar R where B0 is defined

  • ip – toroidal current

  • resolution – g-file grid resolution

  • shot – used to set g-file string

  • time – used to set g-file string

  • RBFkw – keywords passed to internal Rbf interpolator

Returns

self

from_uda(shot=99999, time=0.0, pfx='efm', device='MAST')[source]

Read in data from Unified Data Access (UDA)

Parameters
  • shot – shot number to read in

  • time – time to read in data

  • pfx – UDA data source prefix e.g. pfx+’_psi’

  • device – tokamak name

from_uda_mastu(shot=99999, time=0.0, device='MAST', pfx='epm')[source]

Read in data from Unified Data Access (UDA) for MAST-U

Parameters
  • shot – shot number to read in

  • time – time to read in data

  • device – tokamak name

  • pfx – equilibrium type

from_ppf(shot=99999, time=0.0, dda='EFIT', uid='jetppf', seq=0, device='JET')[source]

Read in data from JET PPF

Parameters
  • shot – shot number to read in

  • time – time to read in data

  • dda – Equilibrium source diagnostic data area

  • uid – Equilibrium source user ID

  • seq – Equilibrium source sequence number

from_efitpp(ncfile=None, shot=99999, time=0.0, device='MAST', pfx=None)[source]

Read in data from EFIT++ netCDF

Parameters
  • ncfile – EFIT++ netCDF file

  • shot – shot number to read in

  • time – time to read in data

  • device – machine

  • pfx – equilibrium type

from_efitpp_mastu(ncfile=None, shot=99999, time=0.0, device='MAST', pfx=None)[source]

Read in data from EFIT++ netCDF

Parameters
  • ncfile – EFIT++ netCDF file

  • shot – shot number to read in

  • time – time to read in data

  • device – machine

  • pfx – equilibrium type

from_aug_sfutils(shot=None, time=None, eq_shotfile='EQI', ed=1)[source]

Fill in gEQDSK data from aug_sfutils, which processes magnetic equilibrium results from the AUG CLISTE code.

Note that this function requires aug_sfutils to be locally installed (pip install aug_sfutils will do). Users also need to have access to the AUG shotfile system.

Parameters
  • shot – AUG shot number from which to read data

  • time – time slice from which to read data

  • eq_shotfile – equilibrium reconstruction to fetch (EQI, EQH, IDE, …)

  • ed – edition of the equilibrium reconstruction shotfile

Returns

self

add_geqdsk_documentation()[source]
class omfit_classes.omfit_eqdsk.OMFITkeqdsk(*args, **kwargs)[source]

Bases: omfit_classes.omfit_namelist.OMFITnamelist

class used to interface with K files used by EFIT

Parameters
  • filename – filename passed to OMFITascii class

  • **kw – keyword dictionary passed to OMFITascii class

linecycle = <itertools.cycle object>
markercycle = <itertools.cycle object>
load(*args, **kw)[source]

Method used to load the content of the file specified in the .filename attribute

Returns

None

save(*args, **kw)[source]

Method used to save the content of the object to the file specified in the .filename attribute

Returns

None

remove_duplicates(keep_first_or_last='first', update_original=True, make_new_copy=False)[source]

Searches through all the groups in the k-file namelist (IN1, INS, EFITIN, etc.) and deletes duplicated variables. You can keep either the first instance or the last instance.

Parameters
  • keep_first_or_last – string (‘first’ or ‘last’) - If there are duplicates, only one can be kept. Should it be the first one or the last one?

  • update_original – bool Set False to leave the original untouched during testing. Use with make_new_copy.

  • make_new_copy – bool Create a copy of the OMFITkeqdsk instance and return it. Useful if the original is not being modified.

Returns

None or OMFITkeqdsk instance (depending on make_new_copy)
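On a plain ordered list of (name, value) cards the operation looks like this (a toy model; the real method works across all namelist groups of the OMFITkeqdsk object):

```python
def dedup(pairs, keep='first'):
    """Drop duplicated variable names from an ordered list of
    (name, value) pairs, keeping the first or last occurrence."""
    seen = {}
    order = []
    for name, value in pairs:
        if name not in seen:
            order.append(name)
            seen[name] = value
        elif keep == 'last':
            seen[name] = value
    return [(n, seen[n]) for n in order]

# 'MXITER' appears twice, as it might in a duplicated namelist card
cards = [('MXITER', 25), ('ERROR', 1e-4), ('MXITER', 99)]
```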

addAuxQuantities()[source]

Adds [‘AuxQuantities’] to the current object

Returns

SortedDict object containing auxiliary quantities

get_weights(fitweights, ishot=None)[source]
from_efitin()[source]
from_omas(ods, time_index=0, time=None)[source]

Generate kEQDSK from OMAS data structure.

Currently this function just writes code_parameters. In the future, parameters including ITIME, PLASMA, EXPMP2, COILS, BTOR, DENR, DENV, SIREF, BRSP, ECURRT, VLOOP, DFLUX, SIGDLC, CURRC79, CURRC139, CURRC199, CURRIU30, CURRIU90, CURRIU150, CURRIL30, CURRIL90, CURRIL150 should be specified from ods raw parameters.

Parameters
  • ods – input ods from which data is added

  • time_index – time index from which data is added to ods

  • time – time in seconds at which to compare the data (if set, it supersedes time_index)

Returns

ODS

to_omas(ods=None, time_index=0, time=None)[source]

Generate OMAS data structure from kEQDSK.

Currently this function just reads code_parameters. In the future, parameters including ITIME, PLASMA, EXPMP2, COILS, BTOR, DENR, DENV, SIREF, BRSP, ECURRT, VLOOP, DFLUX, SIGDLC, CURRC79, CURRC139, CURRC199, CURRIU30, CURRIU90, CURRIU150, CURRIL30, CURRIL90, CURRIL150 should be written to ods raw parameters.

Parameters
  • ods – input ods to which data is added

  • time_index – time index to which data is added to ods

  • time – time in seconds at which to compare the data (if set, it supersedes time_index)

Returns

ODS

compare_omas_constraints(ods, time_index=None, time=None, plot_invalid=False)[source]

Plots comparing constraints in the kEQDSK with respect to what are in an ODS

Parameters
  • ods – ods to use for comparison

  • time_index – force time_index of the ods constraint to compare

  • time – force the time in seconds at which to compare the ods data (if set, it supersedes time_index)

  • plot_invalid – toggle plotting of data points that are marked as invalid in the k-file

Returns

figure handler

plot_press_constraint(in1=None, fig=None, ax=None, label='', color=None, no_extra_info_in_legend=False, no_legend=False, no_marks_for_uniform_weight=True)[source]

Plots pressure constraint in kEQDSK.

For general information on K-FILES, see
  • https://fusion.gat.com/theory/Efitinputs
  • https://fusion.gat.com/theory/Efitin1

Specific quantities related to extracting the pressure profile:

KPRFIT: kinetic fitting mode: 0 off, 1 vs psi, 2 vs R-Z, 3 includes rotation
NPRESS: number of valid points in PRESSR; positive number: number of input data, negative number: read in data from EDAT file, 0: rotational pressure only
RPRESS: -: input pressure profile as a function of dimensionless flux (psi_N), +: R coordinates of input pressure profile in m
ZPRESS: Z coordinates to go with R coordinates in RPRESS if RPRESS>0
PRESSR: pressure in N/m^2 (or Pa) vs. normalized flux (psi_N) for fitting
PRESSBI: pressure at boundary
SIGPREBI: standard deviation for pressure at boundary PRESSBI
KPRESSB: 0: don't put a pressure point at boundary (default), 1: put a pressure point at the boundary

Specific quantities related to understanding KNOTS & basis functions:

KPPFNC: basis function for P': 0 = polynomial, 6 = spline
KPPCUR: number of coefficients for polynomial representation of P', ignored if spline. Default = 3
KPPKNT: number of knots for P' spline, ignored unless KPPFNC=6
PPKNT: P' knot locations in psi_N, vector of length KPPKNT, ignored unless KPPFNC=6
PPTENS: spline tension for P'. Large (like 10) -> approaches piecewise linear; small (like 0.1) -> like a cubic spline
KPPBDRY: constraint switch for P'. Vector of length KPPKNT. Ignored unless KPPFNC=6
PPBDRY: values of P' at each knot location where KPPBDRY=1
KPP2BDRY: on/off for PP2BDRY
PP2BDRY: values of (P')' at each knot location where KPP2BDRY=1

Parameters
  • in1 – NamelistName instance

  • fig – Figure instance (unused, but accepted to maintain consistent format)

  • ax – Axes instance

  • label – string

  • color – Matplotlib color specification

  • no_extra_info_in_legend – bool

  • no_legend – bool

  • no_marks_for_uniform_weight – bool

Returns

Matplotlib color specification

plot_fast_ion_constraints(in1=None, fig=None, ax=None, label='', color=None, density=False, no_extra_info_in_legend=False, no_legend=False)[source]

Documentation on fast ion information in K-FILES:
  • https://fusion.gat.com/theory/Efitin1
  • https://fusion.gat.com/theory/Efitinputs

KPRFIT: kinetic fitting mode: 0 off, 1 vs psi, 2 vs R-Z, 3 includes rotation
NBEAM: number of points for beam data in kinetic mode (in vector DNBEAM)
DNBEAM: beam particle density for kinetic EFIT
PBEAM: beam pressure in Pa vs psi_N for kinetic fitting
PNBEAM: defaults to 0. That is all we know
SIBEAM: psi_N values corresponding to PBEAM

Parameters
  • in1 – NamelistName

  • fig – Figure instance (unused)

  • ax – Axes instance

  • label – string

  • color – Matplotlib color spec

  • density – bool

  • no_extra_info_in_legend – bool

  • no_legend – bool

Returns

Matplotlib color spec

plot_current_constraint(in1=None, fig=None, ax=None, label='', color=None, no_extra_info_in_legend=False, no_legend=False)[source]

K-FILES: see documentation on the IN1 namelist in the k-file: https://fusion.gat.com/theory/Efitin1 and https://fusion.gat.com/theory/Efitinputs

KZEROJ: constrain FF' and P' by applying constraints specified by RZEROJ
  >0: number of constraints to apply
  0: don't apply constraints (default)

SIZEROJ: vector of locations at which Jt is constrained when KZEROJ>0.
  When KZEROJ=1, PSIWANT can be used instead of SIZEROJ(1) by setting SIZEROJ(1)<0.
  See KZEROJ, RZEROJ, VZEROJ, PSIWANT. Default SIZEROJ(1)=-1.0

RZEROJ: vector of radii at which to apply constraints.
  For each element in the vector & corresponding elements in SIZEROJ, VZEROJ, if
    RZEROJ>0: set Jt=0 at coordinate RZEROJ,SIZEROJ
    RZEROJ=0: set the flux surface average current equal to VZEROJ at the surface specified by normalized flux SIZEROJ
    RZEROJ<0: set Jt=0 at the separatrix
  Applied only if KZEROJ>0. Default RZEROJ(1)=0.0

If KZEROJ=1, SIZEROJ(1) may be specified with PSIWANT. If KZEROJ=1 and SIZEROJ(1)<0 then SIZEROJ(1) is set equal to PSIWANT.

PSIWANT: normalized flux value of the surface where the J constraint is desired.
  See KZEROJ, RZEROJ, VZEROJ. Default=1.0

VZEROJ: desired value(s) of normalized J (w.r.t. I/area) at the flux surface PSIWANT (or surfaces SIZEROJ).
  Requires KZEROJ = 1 or >1 and RZEROJ=0.0. Default=0.0

Summary: set KZEROJ to some number of constraint points, then use the SIZEROJ and VZEROJ vectors to set up the psi_N and Jt values at the constraint points.

KNOTS & basis functions:

KFFFNC: basis function for FF': 0 = polynomial, 6 = spline
ICPROF: specific choice of current profile:
  0 = current profile is not specified by this variable
  1 = no edge current density allowed
  2 = free edge current density
  3 = weak edge current density constraint
KFFCUR: number of coefficients for polynomial representation of FF', ignored if spline. Default = 1
KFFKNT: number of knots for FF'. Ignored unless KFFFNC=6
FFKNT: knot locations for FF' in psi_N, vector length should be KFFKNT. Ignored unless KFFFNC=6
FFTENS: spline tension for FF'. Large (like 10) -> approaches piecewise linear; small (like 0.1) -> like a cubic spline
KFFBDRY: constraint switch for FF' (0/1 off/on) for each knot. Defaults to zeros
FFBDRY: value of FF' for each knot, used only when KFFBDRY=1
KFF2BDRY: on/off (1/0) switch for each knot
FF2BDRY: value of (FF')' for each knot, used only when KFF2BDRY=1

Parameters
  • in1 – NamelistName

  • fig – Figure

  • ax – Axes

  • label – string

  • color – Matplotlib color spec

  • no_extra_info_in_legend – bool

  • no_legend – bool

Returns

Matplotlib color spec

plot_mse(in1=None, fig=None, ax=None, label='', color=None, no_extra_info_in_legend=False, no_legend=False)[source]

K-FILES: plot MSE constraints. See https://fusion.gat.com/theory/Efitinputs

RRRGAM: R in meters of the MSE observation point
ZZZGAM: Z in meters of the MSE observation point
TGAMMA: "tangent gamma". Tangent of the measured MSE polarization angle, TGAMMA=(A1*Bz+A5*Er)/(A2*Bt+...)
SGAMMA: standard deviation (uncertainty) for TGAMMA
FWTGAM: "fit weight gamma": 1/0 on/off switches for MSE channels
DTMSEFULL: full width of the MSE data time average window in ms
AA#GAM (where # is 1,2,3,...): geometric correction coefficients for MSE data, generated by EFIT during mode 5

Parameters
  • in1 – NamelistName instance

  • fig – Figure instance

  • ax – Axes instance

  • label – string

  • color – Matplotlib color spec

  • no_extra_info_in_legend – bool

  • no_legend – bool

Returns

Matplotlib color spec

plot_mass_density(combo=None, fig=None, ax=None, label='', color=None, no_legend=False)[source]

K-FILES: plot mass density profile. See https://fusion.gat.com/theory/Efitin1

NMASS: number of valid points in DMASS
DMASS: mass density in kg/m^3. It is ASSUMED that this uses RPRESS (psi or R_major for pressure constraint) to get the position coordinates

Parameters
  • combo – NamelistName instance

  • fig – Figure instance

  • ax – Axes instance

  • label – string

  • color – mpl color spec

  • no_legend – bool

Returns

Matplotlib color spec

plot_pressure_and_current_constraints(in1=None, fig=None, ax=None, label='', color=None, no_extra_info_in_legend=False, no_legend=False, no_marks_for_uniform_weight=True)[source]

Plot pressure and current constraints together

plot_everything(combo=None, fig=None, ax=None, label='', color=None, no_extra_info_in_legend=False, no_legend=False, no_marks_for_uniform_weight=True)[source]

Plot pressure, mass density, current, fast ion pressure, fast ion density, MSE

plot(plottype=None, fig=None, ax=None, label='', color=None, no_extra_info_in_legend=False, no_legend=False)[source]

Plot manager for the k-file class OMFITkeqdsk. This function decides which specific plot function to call and applies generic plot labels. You can also access the individual plots directly, but you won't get the final annotations. EFIT k-file inputs are documented at https://fusion.gat.com/theory/Efitinputs

Parameters

plottype – string

What kind of plot?
  • ‘everything’

  • ‘pressure and current’

  • ‘pressure’

  • ‘current’

  • ‘fast ion density’

  • ‘fast ion pressure’

  • ‘mse’

  • ‘mass density’

Parameters
  • fig – [Optional] Figure instance Define fig and ax to override automatic determination of plot output destination.

  • ax – [Optional] Axes instance or array of Axes instances Define fig and ax to override automatic determination of plot output destination. Ignored if there are not enough subplots to contain the plots ordered by plottype.

  • label – [Optional] string Provide a custom label to include in legends. May be useful when overlaying two k-files. Default: ‘’. Set label=None to disable shot and time in legend entries.

  • no_extra_info_in_legend – bool Do not add extra text labels to the legend to display things like Bt, etc.

  • no_legend – bool Do not add legends to the plots

class omfit_classes.omfit_eqdsk.OMFITmeqdsk(*args, **kwargs)[source]

Bases: omfit_classes.omfit_nc.OMFITnc

class used to interface with M files generated by EFIT

Parameters
  • filename – filename passed to OMFITascii class

  • **kw – keyword dictionary passed to OMFITascii class

signal_info = {'ecc': ['cecurr', 'eccurt', 'fwtec', None, 'E-coil current', 'e'], 'fcc': ['ccbrsp', 'fccurt', 'fwtfc', None, 'F-coil current', 'f'], 'gam': ['cmgam', 'tangam', None, 'siggam', 'MSE tan(gamma)', 'g'], 'lop': ['csilop', 'silopt', 'fwtsi', None, 'flux loops', 'l'], 'mag': ['cmpr2', 'expmpi', 'fwtmp2', None, 'magnetics', 'm'], 'pre': ['cpress', 'pressr', None, 'sigpre', 'pressure', 'p'], 'xxj': ['aux_calxxj', 'aux_mxxj', 'aux_fwtxxj', 'aux_sigxxj', 'current density', 'j']}
pack_it(x, tag, name, dim1=None, is_tmp=True)[source]

Utility function for saving results into the mEQDSK as new OMFITncData instances.

Parameters
  • x – array of data to pack

  • tag – string (SHORT: dictionary key is derived from this)

  • name – string (LONG: this is a description that goes in a field)

  • dim1 – string or None: name of dimension other than time

  • is_tmp – bool: Choose OMFITncDataTmp() instead of OMFITncData() to prevent saving

residual_normality_test(which='mag', is_tmp='some')[source]

Tests whether residuals conform to normal distribution and saves a P-value. A good fit should have random residuals following a normal distribution (due to random measurement errors following a normal distribution). The P-value assesses the probability that the distribution of residuals could be at least as far from a normal distribution as are the measurements. A low P-value is indicative of a bad model or some other problem. https://www.graphpad.com/guides/prism/5/user-guide/prism5help.html?reg_diagnostics_tab_5_3.htm https://www.graphpad.com/support/faqid/1577/

Parameters
  • which – string Parameter to do test on. Options: [‘mag’, ‘ecc’, ‘fcc’, ‘lop’, ‘gam’, ‘pre’, ‘xxj’]

  • is_tmp – string How many of the stats quantities should be loaded as OMFITncDataTmp (not saved to disk)? ‘some’, ‘none’, or ‘all’

rsq_test(which='mag', is_tmp='some')[source]

Calculates R^2 value for fit to a category of signals (like magnetics). The result will be saved into the mEQDSK instance as stats_rsq***. R^2 measures the fraction of variance in the data which is explained by the model and can range from 1 (model explains data) through 0 (model does no better than a flat line through the average) to -1 (model goes exactly the wrong way). https://www.graphpad.com/guides/prism/5/user-guide/prism5help.html?reg_diagnostics_tab_5_3.htm

Parameters
  • which – string (See residual_normality_test doc)

  • is_tmp – string (See residual_normality_test doc)

combo_rsq_tests(is_tmp='some')[source]

Combined R^2 from various groups of signals, including ‘all’. Needs stats_sst* and stats_ssr* and will call rsq_test to make them if not already available.

Parameters

is_tmp – string (See residual_normality_test doc)

plot(**kw)[source]

Function used to plot the chi-squared values stored in m-files. This method is called by .plot() when the object is an m-file

to_omas(ods=None, time_index=0, time_index_efit=0)[source]

translate mEQDSK class to OMAS data structure

Parameters
  • ods – input ods to which data is added

  • time_index – time index to which data is added to ods

  • time_index_efit – time index from mEQDSK

Returns

ODS

class omfit_classes.omfit_eqdsk.OMFITseqdsk(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.omfit_ascii.OMFITascii

class used to interface S files generated by EFIT

Parameters
  • filename – filename passed to OMFITascii class

  • **kw – keyword dictionary passed to OMFITascii class

load(**kw)[source]
save()[source]

The save method is supposed to be overridden by classes which use OMFITobject as a superclass. If left as is, this method detects whether .filename has changed and, if so, copies the file from the original .filename (saved in the .link attribute) to the new .filename

plot(**kw)[source]
class omfit_classes.omfit_eqdsk.fluxSurfaces(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict

Trace flux surfaces and calculate flux-surface averaged and geometric quantities. Inputs can be tables of PSI and Bt, or an OMFITgeqdsk file

Parameters
  • Rin – (ignored if gEQDSK!=None) array of the R grid mesh

  • Zin – (ignored if gEQDSK!=None) array of the Z grid mesh

  • PSIin – (ignored if gEQDSK!=None) PSI defined on the R/Z grid

  • Btin – (ignored if gEQDSK!=None) Bt defined on the R/Z grid

  • Rcenter – (ignored if gEQDSK!=None) Radial location where the vacuum field is defined ( B0 = F[-1] / Rcenter)

  • F – (ignored if gEQDSK!=None) F-poloidal

  • P – (ignored if gEQDSK!=None) pressure

  • rlim – (ignored if gEQDSK!=None) array of limiter r points (used for SOL)

  • zlim – (ignored if gEQDSK!=None) array of limiter z points (used for SOL)

  • gEQDSK – OMFITgeqdsk file or ODS

  • resolution – if int the original equilibrium grid will be multiplied by (resolution+1), if float the original equilibrium grid is interpolated to that resolution (in meters)

  • forceFindSeparatrix – force finding of separatrix even though this may be already available in the gEQDSK file

  • levels – levels in normalized psi. Can be an array ranging from 0 to 1, or the number of flux surfaces

  • map – array ranging from 0 to 1 which will be used to set the levels, or ‘rho’ if flux surfaces are generated based on gEQDSK

  • maxPSI – (default 0.9999)

  • calculateAvgGeo – Boolean which sets whether flux-surface averaged and geometric quantities are automatically calculated

  • quiet – Verbosity level

  • **kw – overwrite key entries

>> OMFIT['test'] = OMFITgeqdsk(OMFITsrc + '/../samples/g133221.01000')
>> # copy the original flux surfaces
>> flx = copy.deepcopy(OMFIT['test']['fluxSurfaces'])
>> # to use PSI
>> mapping = None
>> # to use RHO instead of PSI
>> mapping = OMFIT['test']['RHOVN']
>> # trace flux surfaces
>> flx.findSurfaces(np.linspace(0, 1, 100), map=mapping)
>> # to increase the accuracy of the flux surface tracing
>> # (higher numbers --> smoother surfaces, more time, more memory)
>> flx.changeResolution(2)
>> # plot
>> flx.plot()

load()[source]
findSurfaces(levels=None, map=None)[source]

Find flux surfaces at levels

Parameters

levels – defines at which levels the flux surfaces will be traced

  • None: use levels defined in gFile

  • Integer: number of levels

  • list: list of levels to find surfaces at

Parameters

map – psi mapping on which levels are defined (e.g. rho as function of psi)

changeResolution(resolution)[source]
Parameters

resolution – resolution to use when tracing flux surfaces

  • integer: multiplier of the original table

  • float: grid resolution in meters

resample(npts=None, technique='uniform', phase='Xpoint')[source]

resample number of points on flux surfaces

Parameters
  • npts – number of points

  • technique – ‘uniform’,’separatrix’,’pest’

  • phase – float for poloidal angle or ‘Xpoint’

surfAvg(function=None)[source]

Flux surface averaged quantity for each flux surface

Parameters

function – function which returns the value of the quantity to be flux surface averaged at coordinates r,z

Returns

array of the quantity flux-surface averaged for each flux surface

Example

>> def test_avg_function(r, z):
>>     return RectBivariateSplineNaN(Z, R, PSI, k=1).ev(z, r)
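The averaging operation itself can be sketched self-contained (here a plain arc-length average along the contour; the true flux-surface average weights each element by dl/|grad psi|, and the two coincide for this symmetric toy case):

```python
import numpy as np

def surf_avg(r, z, q):
    """Poloidal arc-length average of a quantity q sampled along a
    closed (r, z) contour: <q> = (integral of q dl) / (integral of dl)."""
    dl = np.hypot(np.diff(r), np.diff(z))  # segment lengths
    qm = 0.5 * (q[1:] + q[:-1])            # midpoint values
    return np.sum(qm * dl) / np.sum(dl)

# usage: <R> on a circular surface centered at R = 2
theta = np.linspace(0, 2 * np.pi, 1001)
r = 2 + 0.5 * np.cos(theta)
z = 0.5 * np.sin(theta)
avg_r = surf_avg(r, z, r)
```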

surface_integral(what)[source]

Cross section integral of a quantity

Parameters

what – quantity to be integrated specified as array at flux surface

Returns

array of the integration from core to edge

volume_integral(what)[source]

Volume integral of a quantity

Parameters

what – quantity to be integrated specified as array at flux surface

Returns

array of the integration from core to edge

plotFigure(*args, **kw)[source]
plot(only2D=False, info=False, label=None, **kw)[source]
rz_miller_geometry(poloidal_resolution=101)[source]

return R,Z coordinates for all flux surfaces from Miller geometry coefficients in geo. Based on gacode/gapy/src/gapy_geo.f90

Parameters

poloidal_resolution – integer with number of equispaced points in poloidal angle, or array of poloidal angles

Returns

2D arrays with (R, Z) flux surface coordinates

sol(levels=31, packing=3, resolution=0.01, rlim=None, zlim=None, open_flx=None)[source]

Trace open field lines flux surfaces in the SOL

Parameters
  • levels

    where flux surfaces should be traced

    • integer number of flux surfaces

    • list of levels

  • packing – if levels is integer, packing of flux surfaces close to the separatrix

  • resolution – accuracy of the flux surface tracing

  • rlim – list of R coordinates points where flux surfaces intersect limiter

  • zlim – list of Z coordinates points where flux surfaces intersect limiter

  • open_flx – dictionary with flux surface rhon value as keys of where to calculate SOL (passing this will not set the sol entry in the flux-surfaces class)

Returns

dictionary with SOL flux surface information

to_omas(ods=None, time_index=0)[source]

translate fluxSurfaces class to OMAS data structure

Parameters
  • ods – input ods to which data is added

  • time_index – time index to which data is added

Returns

ODS

from_omas(ods, time_index=0)[source]

populate fluxSurfaces class from OMAS data structure

Parameters
  • ods – input ods to which data is added

  • time_index – time index to which data is added

Returns

ODS

class omfit_classes.omfit_eqdsk.fluxSurfaceTraces(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict

deploy(filename=None, frm='arrays')[source]
load(filename)[source]
omfit_classes.omfit_eqdsk.boundaryShape(a, eps, kapu, kapl, delu, dell, zetaou, zetaiu, zetail, zetaol, zoffset, upnull=False, lonull=False, npts=90, doPlot=False, newsq=array([0.0, 0.0, 0.0, 0.0]), **kw)[source]

Function used to generate boundary shapes based on T. C. Luce, PPCF, 55 9 (2013) Direct Python translation of the IDL program /u/luce/idl/shapemaker3.pro

Parameters
  • a – minor radius

  • eps – aspect ratio

  • kapu – upper elongation

  • kapl – lower elongation

  • delu – upper triangularity

  • dell – lower triangularity

  • zetaou – upper outer squareness

  • zetaiu – upper inner squareness

  • zetail – lower inner squareness

  • zetaol – lower outer squareness

  • zoffset – z-offset

  • upnull – toggle upper x-point

  • lonull – toggle lower x-point

  • npts – int number of points (per quadrant)

  • doPlot – plot boundary shape construction

  • newsq – A 4 element array, into which the new squareness values are stored

Returns

tuple with arrays of r,z,zref

>> boundaryShape(a=0.608,eps=0.374,kapu=1.920,kapl=1.719,delu=0.769,dell=0.463,zetaou=-0.155,zetaiu=-0.255,zetail=-0.174,zetaol=-0.227,zoffset=0.000,upnull=False,lonull=False,doPlot=True)

class omfit_classes.omfit_eqdsk.BoundaryShape(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict

Class used to generate boundary shapes based on T. C. Luce, PPCF, 55 9 (2013)

Parameters
  • a – minor radius

  • eps – inverse aspect ratio (a/R)

  • kapu – upper elongation

  • kapl – lower elongation

  • delu – upper triangularity

  • dell – lower triangularity

  • zetaou – upper outer squareness

  • zetaiu – upper inner squareness

  • zetail – lower inner squareness

  • zetaol – lower outer squareness

  • zoffset – z-offset

  • upnull – toggle upper x-point

  • lonull – toggle lower x-point

  • rbbbs – R boundary points

  • zbbbs – Z boundary points

  • rlim – R limiter points

  • zlim – Z limiter points

  • npts – int number of points (per quadrant)

  • doPlot – plot boundary shape

Returns

tuple with arrays of r,z,zref

>> BoundaryShape(a=0.608,eps=0.374,kapu=1.920,kapl=1.719,delu=0.769,dell=0.463,zetaou=-0.155,zetaiu=-0.255,zetail=-0.174,zetaol=-0.227,zoffset=0.000,doPlot=True)

plot(fig=None)[source]
sameBoundaryShapeAs(rbbbs=None, zbbbs=None, upnull=None, lonull=None, gEQDSK=None, verbose=None, npts=90)[source]

Measure boundary shape from input

Parameters
  • rbbbs – array of R points to match

  • zbbbs – array of Z points to match

  • upnull – upper x-point

  • lonull – lower x-point

  • gEQDSK – input gEQDSK to match (wins over rbbbs and zbbbs)

  • verbose – print debug statements

  • npts – int Number of points

Returns

dictionary with parameters to feed to the boundaryShape function [a, eps, kapu, kapl, delu, dell, zetaou, zetaiu, zetail, zetaol, zoffset, upnull, lonull]

fitBoundaryShape(rbbbs=None, zbbbs=None, upnull=None, lonull=None, gEQDSK=None, verbose=None, precision=0.001, npts=90)[source]

Fit boundary shape from input

Parameters
  • rbbbs – array of R points to match

  • zbbbs – array of Z points to match

  • upnull – upper x-point

  • lonull – lower x-point

  • gEQDSK – input gEQDSK to match (wins over rbbbs and zbbbs)

  • verbose – print debug statements

  • doPlot – visualize match

  • precision – optimization tolerance

  • npts – int Number of points

Returns

dictionary with parameters to feed to the boundaryShape function [a, eps, kapu, kapl, delu, dell, zetaou, zetaiu, zetail, zetaol, zoffset, upnull, lonull]

omfit_classes.omfit_eqdsk.fluxGeo(inputR, inputZ, lcfs=False, doPlot=False)[source]

Calculate geometric properties of a single flux surface

Parameters
  • inputR – R points

  • inputZ – Z points

  • lcfs – whether this is the last closed flux surface (for sharp feature of x-points)

  • doPlot – plot geometric measurements

Returns

dictionary with geometric quantities
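A few of the quantities involved can be computed directly from the contour extrema (an illustrative subset with simplified definitions; fluxGeo computes many more, with more careful handling near x-points):

```python
import numpy as np

def flux_geo(r, z):
    """Basic geometric quantities of a closed (r, z) flux surface."""
    r0 = 0.5 * (r.max() + r.min())         # geometric center
    a = 0.5 * (r.max() - r.min())          # minor radius
    kappa = 0.5 * (z.max() - z.min()) / a  # elongation
    delta = (r0 - r[np.argmax(z)]) / a     # upper triangularity
    return dict(R0=r0, a=a, kappa=kappa, delta=delta)

# usage: a shifted-D shape with a = 0.6, kappa = 1.8, arcsin(delta) = 0.3
theta = np.linspace(0, 2 * np.pi, 721)
r = 1.7 + 0.6 * np.cos(theta + 0.3 * np.sin(theta))
z = 1.8 * 0.6 * np.sin(theta)
geo = flux_geo(r, z)
```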

omfit_classes.omfit_eqdsk.rz_miller(a, R, kappa=1.0, delta=0.0, zeta=0.0, zmag=0.0, poloidal_resolution=101)[source]

return R,Z coordinates for all flux surfaces from Miller geometry coefficients in an input.profiles file. Based on gacode/gapy/src/gapy_geo.f90

Parameters
  • a – minor radius

  • R – major radius

  • kappa – elongation

  • delta – triangularity

  • zeta – squareness

  • zmag – z offset

  • poloidal_resolution – integer with number of equispaced points in poloidal angle, or array of poloidal angles

Returns

1D arrays with (R, Z) flux surface coordinates
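The parametrization itself is compact enough to sketch. This is the standard Miller form, believed to match the convention above; verify against gapy_geo.f90 before relying on it:

```python
import numpy as np

def rz_miller_sketch(a, R, kappa=1.0, delta=0.0, zeta=0.0, zmag=0.0, n=101):
    """Miller parametrization of one flux surface:
        R(t) = R + a*cos(t + arcsin(delta)*sin(t))
        Z(t) = zmag + kappa*a*sin(t + zeta*sin(2t))
    """
    t = np.linspace(0, 2 * np.pi, n)
    r = R + a * np.cos(t + np.arcsin(delta) * np.sin(t))
    z = zmag + kappa * a * np.sin(t + zeta * np.sin(2 * t))
    return r, z

r, z = rz_miller_sketch(a=0.6, R=1.7, kappa=1.8, delta=0.3)
```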

omfit_classes.omfit_eqdsk.miller_derived(rmin, rmaj, kappa, delta, zeta, zmag, q)[source]

Originally adapted by A. Tema from FORTRAN of gacode/shared/GEO/GEO_do.f90

Parameters
  • rmin – minor radius

  • rmaj – major radius

  • kappa – elongation

  • delta – triangularity

  • zeta – squareness

  • zmag – z magnetic axis

  • q – safety factor

Returns

dictionary with volume, grad_r0, bp0, bt0

omfit_error

class omfit_classes.omfit_error.OMFITerror(error='Error', traceback=None)[source]

Bases: object

class omfit_classes.omfit_error.OMFITobjectError(*args, **kwargs)[source]

Bases: omfit_classes.omfit_error.OMFITerror, omfit_classes.startup_framework.OMFITobject, omfit_classes.sortedDict.SortedDict

This class is a subclass of OMFITobject and is used in OMFIT when loading of an OMFITobject subclass fails while loading a project.

Note that the original file from which the loading failed is not lost and can be accessed from the .filename attribute of this object.

class omfit_classes.omfit_error.OMFITexpressionError(error='Expression Error', **kw)[source]

Bases: omfit_classes.omfit_error.OMFITerror

omfit_execution_diagram

class omfit_classes.omfit_execution_diagram.OMFITexecutionDiagram(*args, **kwargs)[source]

Bases: omfit_classes.omfit_ascii.OMFITascii, omfit_classes.sortedDict.SortedDict

load()[source]
print_sorted(by='time', nitems=10)[source]

omfit_fastran

class omfit_classes.omfit_fastran.OMFITfastran(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.omfit_ascii.OMFITascii

FASTRAN data files

load()[source]
save()[source]

The save method is supposed to be overridden by classes which use OMFITobject as a superclass. If left as-is, this method detects whether .filename was changed and, if so, copies the original .filename (saved in the .link attribute) to the new .filename

omfit_focus

class omfit_classes.omfit_focus.OMFITfocusboundary(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.omfit_ascii.OMFITascii

OMFITobject used to interface with boundary ascii files used in FOCUS.

Parameters

filename – filename passed to OMFITascii class

All additional keyword arguments are passed to OMFITascii

load()[source]

Load the file and parse it into a sorted dictionary

save()[source]

Write the data in standard ascii format

plot(nd=3, ax=None, cmap='RdBu_r', vmin=None, vmax=None, colorbar=True, nzeta=60, ntheta=120, force_zeta=None, **kwargs)[source]

Plot the normal field on the 3D boundary surface or as a contour by angle.

Parameters
  • nd – int. Choose 3 to plot a surface in 3D, and 2 to plot a contour in angle.

  • ax – Axes. Axis into which the plot will be made.

  • cmap – string. A valid matplotlib color map name.

  • vmin – float. Minimum of the color scaling.

  • vmax – float. Maximum of the color scaling.

  • colorbar – bool. Draw a colorbar.

  • nzeta – int. Number of points in the toroidal angle.

  • ntheta – int. Number of points in the poloidal angle (must be integer multiple of nzeta).

  • force_zeta – list. Specific toroidal angle points (in degrees) used instead of nzeta grid. When nd=1, defaults to [0].

  • **kwargs – dict. All other keyword arguments are passed to the mplot3d plot_trisurf function.

Returns

Figure.

class omfit_classes.omfit_focus.OMFITfocusharmonics(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.omfit_ascii.OMFITascii

OMFITobject used to interface with harmonics ascii files used in FOCUS.

Parameters

filename – filename passed to OMFITascii class

All additional keyword arguments are passed to OMFITascii

load()[source]

Load the file and parse it into a sorted dictionary

save()[source]

Write the data in standard ascii format

plot(axes=None, **kwargs)[source]

Plot the normal field spectra and weighting.

Parameters
  • axes – list. 3 Axes into which the amplitude, phase, and weight plots will be made.

  • **kwargs – dict. All other keyword arguments are passed to the plot function.

Returns

list. All the lines plotted.

omfit_formatter

omfit_classes.omfit_formatter.omfit_formatter(content)[source]

Format a Python string according to OMFIT style. Based on Black: https://github.com/psf/black with 140 chars. Equivalent to running: black -S -l 140 -t py36 filename

NOTE: some versions of black have issues when a comma trails a parenthesis. Version 19.3b0 is OK.

Parameters

content – string with Python code to format

Returns

formatted Python code; None if nothing changed; False if formatting was skipped due to an InvalidInput error
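The three-valued return contract (formatted string / None / False) can be sketched as follows; black itself is replaced by a trivial stand-in formatter so the sketch is self-contained:

```python
def format_with_contract(content, formatter=lambda s: s.rstrip() + '\n'):
    """Return formatted code, None if nothing changed, or False on invalid input.

    `formatter` is a stand-in for black; omfit_formatter itself runs
    black with -S -l 140 -t py36 settings.
    """
    try:
        compile(content, '<string>', 'exec')  # reject invalid Python up front
    except SyntaxError:
        return False
    formatted = formatter(content)
    return None if formatted == content else formatted
```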

omfit_classes.omfit_formatter.omfit_file_formatter(filename, overwrite=True)[source]

Format a Python file according to OMFIT style. Based on Black: https://github.com/psf/black with 140 chars. Equivalent to running: black -S -l 140 -t py36 filename

Parameters
  • filename – filename of the Python file to format. If a directory is passed, then all files ending with .py will be processed.

  • overwrite – overwrite original file or simply return if the file has changed

Returns

formatted Python code; None if nothing changed; False if style enforcement is skipped or the input was invalid. If a directory was passed, then a dictionary with each processed file as key is returned.

omfit_gacode

class omfit_classes.omfit_gacode.OMFITgacode(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.omfit_ascii.OMFITascii

OMFIT class used to interface with GAcode input.XXX files. This class supports .gen, .extra, .geo, .profiles files

.plot() method is available for .profiles files

Parameters
  • GACODEtype – force ‘profiles’ to parse the input.profiles format, or use None for autodetection based on the file name

  • filename – filename passed to OMFITascii class

  • **kw – keyword dictionary passed to OMFITascii class

property GACODEtype
load()[source]

Method used to load the content of the file specified in the .filename attribute

Returns

None

save()[source]

Method used to save the content of the object to the file specified in the .filename attribute

Returns

None

class omfit_classes.omfit_gacode.OMFITtgyro(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.startup_framework.OMFITobject, pygacode.tgyro.data.tgyrodata

Class used to interface with TGYRO results directory

Parameters

filename – directory where the TGYRO result files are stored. The data in this directory will be loaded upon creation of the object.

load()[source]
plot_profiles_evolution(quantity=None, x='rho', ax=None, colors=None)[source]

plot evolution in input.gacode.XXX of the quantity

Parameters
  • quantity – quantity to be plotted (plots Te, Ti, ne, omega0 if quantity is None)

  • x – x axis, typically rho or rmin

  • ax – axis to plot in (only if quantity is not None)

  • colors

get_residual()[source]
sprofile(what, nf0=201, x='r/a', verbose=False)[source]

This function returns smooth profiles on a uniform [‘r/a’, ‘rho’, ‘rmin’, ‘rmaj/a’] grid

Parameters
  • what – what profile to return [‘r/a’, ‘rho’, ‘rmin’, ‘rmaj/a’, ‘te’, ‘a/LTe’, ‘ne’, ‘a/Lne’ , ‘ti’, ‘a/LTi’, ‘ni’, ‘a/Lni’, ‘M=wR/cs’]

  • nf0 – number of points

  • x – return profiles equally spaced in ‘r/a’, ‘rho’, ‘rmin’, ‘rmaj/a’

  • verbose – plot the requested quantity

Returns

what quantity (nf0 x niterations) at the x locations

Note: all TGYRO quantities are naturally defined over ‘r/a’

example:

>> tmp=OMFITtgyro('…')
>> x='rho'
>> y='te'
>> pyplot.plot(tmp[x].T, tmp[y].T, 'or')
>> X=tmp.sprofile(x, nf0=201, x=x, verbose=False)
>> Y=tmp.sprofile(y, nf0=201, x=x, verbose=False)
>> pyplot.plot(X, Y, 'b', alpha=0.25)

jacobian(return_matrix=False)[source]
Parameters

return_matrix – return jacobian as dictionary or matrix

Returns

dictionary with the jacobians of the last iteration as calculated by TGYRO

plot(**kw)[source]

This function plots ne, ni, te, ti for every iteration run by TGYRO

Returns

None

plotFigure(*args, **kw)[source]

This function plots ne, ni, te, ti for every iteration run by TGYRO

Returns

None

calcQ(it_index=-1)[source]

Calculate and return the fusion energy gain factor, Q, assuming D+T->alpha+neutron is the main reaction

Parameters

it_index – Indicates for which iteration to calculate Q
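For D+T->alpha+neutron, the 17.6 MeV reaction energy is split between a 3.5 MeV alpha and a 14.1 MeV neutron, so the total fusion power is about 5 times the alpha heating power. A minimal sketch of the gain factor (the actual method extracts the relevant powers from the TGYRO output; the function and argument names here are illustrative):

```python
def fusion_gain(p_alpha_MW, p_aux_MW):
    """Q = P_fusion / P_aux, with P_fusion = (17.6 / 3.5) * P_alpha for D-T."""
    return (17.6 / 3.5) * p_alpha_MW / p_aux_MW
```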

class omfit_classes.omfit_gacode.OMFITgyro(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.startup_framework.OMFITobject, pygacode.gyro.data_plot.gyrodata_plot

Class used to interface with GYRO results directory

This class extends the OMFITgyro class with the save/load methods of the OMFITpath class so that the save/load carries all the original files from the GYRO output

Parameters
  • filename – directory where the GYRO result files are stored. The data in this directory will be loaded upon creation of the object.

  • extra_files – Any extra files that should be downloaded from the remote location

  • test_mode – Don’t raise an exception if out.gyro.t is not present (and abort loading at that point)

load()[source]
plot_balloon(**kw)

Plot the ballooning eigenmode structure

ARGUMENTS:
index: one of ‘phi’, ‘a’, ‘aperp’ or an integer
tmax: max value of (extended angle)/pi

plot_freq(**kw)

Plot gamma and omega vs time

ARGUMENTS: w: fractional width of time window

plot_gbflux(**kw)

Plot the gyrobohm flux versus time.

plot_gbflux_exc(**kw)
plot_gbflux_i(**kw)

Plot flux versus radius

plot_gbflux_n(**kw)

Plot ky-dependent flux

plot_gbflux_rt(**kw)
plot_moment_zero(**kw)
plot_phi(**kw)

Plot the n=0 AND n>0 potentials versus time.

ARGUMENTS: ymax : max vertical plot range

plot_profile_tot(**kw)
plot_zf(**kw)

Plot the zonal (n=0) potential versus time.

ARGUMENTS: w: fractional time-average window

class omfit_classes.omfit_gacode.OMFITcgyro(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.startup_framework.OMFITobject, pygacode.cgyro.data_plot.cgyrodata_plot

Class used to interface with CGYRO results directory

Parameters
  • filename – directory where the CGYRO result files are stored. The data in this directory will be loaded upon creation of the object.

  • extra_files – Any extra files that should be downloaded from the remote location

  • test_mode – Don’t raise an exception if out.gyro.t is not present (and abort loading at that point)

load()[source]
plot_ball(**kw)
plot_corrug(**kw)
plot_error(**kw)
plot_flux(**kw)
plot_freq(**kw)
plot_geo(**kw)
plot_hb(**kw)
plot_hball(**kw)
plot_hbcut(**kw)
plot_kx_phi(**kw)
plot_kxky_phi(**kw)
plot_ky_flux(**kw)
plot_ky_freq(**kw)
plot_ky_phi(**kw)
plot_low(**kw)
plot_phi(**kw)
plot_rcorr_phi(**kw)
plot_shift(**kw)
plot_xflux(**kw)
plot_zf(**kw)
class omfit_classes.omfit_gacode.OMFITneo(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.startup_framework.OMFITobject, pygacode.neo.data.NEOData

Class used to interface with NEO results directory

Parameters

filename – directory where the NEO result files are stored. The data in this directory will be loaded upon creation of the object.

load()[source]
omfit_classes.omfit_gacode.copy_gacode_module_settings(root=None)[source]

Utility function to sync all GACODE modules to use the same working environment

Parameters

root – GACODE module to use as template (if None only sets module[‘SETTINGS’][‘SETUP’][‘branch’])

omfit_gaprofiles

class omfit_classes.omfit_gaprofiles.OMFITplasma_cer(*args, **kwargs)[source]

Bases: omfit_classes.omfit_ascii.OMFITascii, omfit_classes.sortedDict.SortedDict

Process a GAprofiles dplasma_cer_format.shot.time file

Parameters
  • filename – The path to the file

  • **kw – Passed to OMFITascii.__init__

load()[source]

omfit_gapy

class omfit_classes.omfit_gapy.OMFITgacode(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.omfit_ascii.OMFITascii

OMFIT class used to interface with GAcode input.XXX files. This class supports .gen, .extra, .geo, .profiles files

.plot() method is available for .profiles files

Parameters
  • GACODEtype – force ‘profiles’ to parse the input.profiles format, or use None for autodetection based on the file name

  • filename – filename passed to OMFITascii class

  • **kw – keyword dictionary passed to OMFITascii class

property GACODEtype
load()[source]

Method used to load the content of the file specified in the .filename attribute

Returns

None

save()[source]

Method used to save the content of the object to the file specified in the .filename attribute

Returns

None

class omfit_classes.omfit_gapy.OMFITinputprofiles(*args, **kwargs)[source]

Bases: omfit_classes.omfit_gapy.OMFITgacode, omfit_classes.omfit_gapy._gacode_profiles

load()[source]

Method used to load the content of the file specified in the .filename attribute

Returns

None

save()[source]

Method used to save the content of the object to the file specified in the .filename attribute

Returns

None

sort()[source]
Parameters

key – function that returns a string that is used for sorting or dictionary key whose content is used for sorting

>> tmp=SortedDict()
>> for k in range(5):
>>     tmp[k]={}
>>     tmp[k]['a']=4-k
>> # by dictionary key
>> tmp.sort(key='a')
>> # or equivalently
>> tmp.sort(key=lambda x: tmp[x]['a'])

Parameters

**kw – additional keywords passed to the underlying list sort command

Returns

sorted keys

consistent_derived()[source]

Enforce consistency in the same way the input.gacode format would

Returns

self

inputgacode()[source]

Return an OMFITinputgacode object from the data contained in this object

Returns

OMFITinputgacode

class omfit_classes.omfit_gapy.OMFITinputgacode(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.omfit_ascii.OMFITascii, omfit_classes.gapy.Gapy, omfit_classes.omfit_gapy._gacode_profiles

property input_profiles_compatibility_mode
load()[source]

Load data in memory from file

Returns

self

sort()[source]
Parameters

key – function that returns a string that is used for sorting or dictionary key whose content is used for sorting

>> tmp=SortedDict()
>> for k in range(5):
>>     tmp[k]={}
>>     tmp[k]['a']=4-k
>> # by dictionary key
>> tmp.sort(key='a')
>> # or equivalently
>> tmp.sort(key=lambda x: tmp[x]['a'])

Parameters

**kw – additional keywords passed to the underlying list sort command

Returns

sorted keys

save()[source]

The save method is supposed to be overridden by classes which use OMFITobject as a superclass. If left as-is, this method detects whether .filename was changed and, if so, copies the original .filename (saved in the .link attribute) to the new .filename

inputprofiles(via_omas=False)[source]

Return an OMFITinputprofiles object from the data contained in this object

Parameters

via_omas – make the transformation via OMAS

Returns

OMFITinputprofiles

inputtglf(irhos=None, inputtglf_objects=True)[source]

Evaluate TGLF input parameters at given radii

Parameters
  • irhos – indices of desired input.tglf files. If None, return for all radii.

  • inputtglf_objects – if True, create OMFITinputtglf objects; else use simple dictionaries

Returns

dictionary of input.tglf objects for each grid point in this object

omfit_gato

class omfit_classes.omfit_gato.OMFITdskgato(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.omfit_ascii.OMFITascii

OMFIT class used to interface to equilibria files generated by GATO (.dskgato files)

NOTE: this object is “READ ONLY”, meaning that changes to the entries of this object will not be saved to a file. A .save() method could be written if it becomes necessary.

Parameters
  • filename – filename passed to OMFITobject class

  • **kw – keyword dictionary passed to OMFITobject class

load()[source]
add_derived()[source]

Compute additional quantities that are not in the DSKGATO file itself

plot(usePsi=False, only2D=False, **kw)[source]

Function used to plot dskgato-files. This plot shows flux surfaces in the vessel, pressure, q profiles, P’ and FF’

Parameters
  • usePsi – In the plots, use psi instead of rho

  • only2D – only make flux surface plot

  • levels – list of sorted numeric values to pass to 2D plot as contour levels

  • label – plot item label to apply to lines in 1D plots (only the q plot has a legend called by the geqdsk class itself) and to the boundary contour in the 2D plot (this plot doesn’t call legend by itself)

  • ax – Axes instance to plot in when using only2D

  • **kw – Standard plot keywords (e.g. color, linewidth) will be passed to Axes.plot() calls.

class omfit_classes.omfit_gato.OMFITo4gta(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.omfit_ascii.OMFITascii

OMFIT class used to interface with GATO o4gta files

Parameters
  • filename – filename passed to OMFITascii class

  • **kw – keyword dictionary passed to OMFITascii class

load()[source]
property modes
plot(asinh_transform=1.0)[source]
Parameters

asinh_transform – apply xi_mod = arcsinh(xi * asinh_transform) transformation
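The arcsinh transform is linear near zero and logarithmic for large amplitudes, so small sidebands remain visible next to a dominant mode while the sign of the displacement is preserved. A sketch, assuming numpy:

```python
import numpy as np

def asinh_transform(xi, scale=1.0):
    """xi_mod = arcsinh(xi * scale): linear near 0, log-like for large |xi|."""
    return np.arcsinh(np.asarray(xi) * scale)
```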

omfit_classes.omfit_gato.OMFITgatohelp(*args, **kw)[source]

generates help dictionary for GATO namelist starting from smaphXXX.f

Parameters
  • *args – arguments passed to OMFITascii object

  • **kw – keyword arguments passed to OMFITascii object

Returns

OMFITjson with parsed help definitions and default values

class omfit_classes.omfit_gato.OMFITgatoDiagnostic(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.omfit_ascii.OMFITascii

OMFIT class used to interface with GATO diagnostic.dat files

Parameters
  • filename – filename passed to OMFITascii class

  • **kw – keyword dictionary passed to OMFITascii class

load()[source]
plot(item='xreal(j,i)')[source]
Parameters

item – item to plot against the R, Z grid

plot_modes(item='xreal(j,i)', tolerance=0.25)[source]
Parameters
  • item – item to plot against the R, Z grid

  • tolerance – minimum size of a mode, relative to the largest mode, required for it to be plotted

omfit_genray

class omfit_classes.omfit_genray.OMFITgenray(*args, **kwargs)[source]

Bases: omfit_classes.omfit_nc.OMFITnc

pr(inp)[source]
slow_fast()[source]
to_omas(ods=None, time_index=0, new_sources=True, n_rho=201)[source]

translate GENRAY class to OMAS data structure

Parameters
  • ods – input ods to which data is added

  • time_index – time index to which data is added

  • new_sources – wipe out existing sources

Returns

ODS

plot(gEQDSK=None, showSlowFast=False)[source]

omfit_gingred

class omfit_classes.omfit_gingred.OMFITgingred(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.omfit_ascii.OMFITascii

OMFIT class used to interface with GINGRED input files

Parameters
  • filename – filename passed to OMFITascii class

  • **kw – keyword dictionary passed to OMFITascii class

load()[source]
save()[source]

The save method is supposed to be overridden by classes which use OMFITobject as a superclass. If left as it is this method can detect if .filename was changed and if so, makes a copy from the original .filename (saved in the .link attribute) to the new .filename

omfit_github

omfit_classes.omfit_github.get_gh_remote_org_repo_branch(**kw)[source]

Looks up local name for upstream remote repo, GitHub org, repository name, and current branch. Like calling get_git_branch_info with no_pr_lookup=True.

Returns

tuple containing 4 strings remote, org, repository, branch

omfit_classes.omfit_github.on_latest_gh_commit()[source]

Returns True if the current working commit is the same as the latest GitHub commit for this branch.

omfit_classes.omfit_github.convert_gh_time_str_datetime(t)[source]

Convert a GitHub (gh) time string to a datetime object

Parameters

t – A time string like
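GitHub API timestamps are ISO 8601 UTC strings such as ‘2020-01-15T22:03:45Z’; the conversion presumably reduces to something like the following sketch:

```python
from datetime import datetime

def gh_time_to_datetime(t):
    """Parse a GitHub API timestamp like '2020-01-15T22:03:45Z'."""
    return datetime.strptime(t, '%Y-%m-%dT%H:%M:%SZ')

dt = gh_time_to_datetime('2020-01-15T22:03:45Z')
```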

omfit_classes.omfit_github.get_OMFIT_GitHub_token(token=None)[source]
Parameters

token – string or None. Token for accessing GitHub. None triggers an attempt to decrypt from the $GHUSER@token.github.com credential file; must be set up in advance with the set_OMFIT_GitHub_token() function.

Returns

GitHub token

omfit_classes.omfit_github.set_OMFIT_GitHub_token(token)[source]
Parameters

token – 40-character token string to be encrypted in the $GHUSER@token.github.com credential file

class omfit_classes.omfit_github.OMFITgithub_paged_fetcher(org=None, repository=None, path='comments', token=None, selection=None)[source]

Bases: list

Interact with GitHub via the GitHub API (https://developer.github.com/v3/) to fetch paged results from a path of the form:

https://api.github.com/repos/{org}/{repo}/{path}

Parameters
  • org – string [optional] The organization that the repo is under, like ‘gafusion’. If None, attempts to lookup with method based on git rev-parse. Falls back to gafusion on failure.

  • repository – string [optional] The name of the repo on GitHub. If None, attempts to lookup with method based on git rev-parse. Falls back to OMFIT-source on failure.

  • path – string The part of the repo api to access

  • token – string or None. Token for accessing GitHub. None triggers an attempt to decrypt from file (must be set up in advance). Passed to get_OMFIT_GitHub_token.

  • selection – dict A dictionary such as {‘state’:’all’}

find_last_page()[source]

Find the last page number and sets self.last_page

Returns

self.last_page
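GitHub advertises the page range of a paged endpoint in the Link response header. A self-contained sketch of the header parsing that a last-page lookup involves (the actual implementation may differ):

```python
import re

def last_page_from_link_header(link_header):
    """Extract the rel="last" page number from a GitHub Link header."""
    for part in link_header.split(','):
        m = re.search(r'[?&]page=(\d+)[^>]*>;\s*rel="last"', part)
        if m:
            return int(m.group(1))
    return 1  # single page: GitHub omits the rel="last" entry

link = ('<https://api.github.com/repos/gafusion/OMFIT-source/issues?page=2>; rel="next", '
        '<https://api.github.com/repos/gafusion/OMFIT-source/issues?page=14>; rel="last"')
```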

fetch()[source]

Fetch the paged results from GitHub and store them in self

Returns

self

get_sel_str(**kw)[source]

Figure out how to get the selection part of the url, such as ?state=all&page=1, etc.

Parameters

**kw – Any variables (such as page) to override self.selection set at initialization

Returns

An ampersand ‘&’ separated string of key=value (and the question mark)
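That behavior reduces to a urlencode call over the merged selection; a sketch (assumed, not the actual implementation):

```python
from urllib.parse import urlencode

def sel_str(selection, **overrides):
    """Build '?key=value&...' from a selection dict, with keyword overrides."""
    merged = dict(selection, **overrides)
    return '?' + urlencode(merged) if merged else ''

s = sel_str({'state': 'all'}, page=3)
```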

property results
omfit_classes.omfit_github.get_pull_request_number(return_info=False, **kw)[source]

Gets the pull request number associated for the current git branch if there is an open pull request.

Passes parameters org, destination_org, branch, and repository to get_git_branch_info().

Parameters

return_info – bool [optional] Return a dictionary of information instead of just the pull request number

Returns

int, dict-like, or None. Pull request number if one can be found, otherwise None. If return_info: dictionary returned by OMFITgithub_paged_fetcher with ‘org’ key added; contains ‘number’, too.

omfit_classes.omfit_github.get_git_branch_info(remote=None, org=None, destination_org=None, repository=None, branch=None, url=None, omfit_fallback=True, no_pr_lookup=False, return_pr_destination_org=True, server=None)[source]

Looks up local name for upstream remote repo, GitHub org, repository name, current branch, & open pull request info All parameters are optional and should only be provided if trying to override some results

Parameters
  • remote – string [optional] Local name for the upstream remote. If None, attempts to lookup with method based on git rev-parse.

  • org – string [optional] The organization that the repo is under, like ‘gafusion’. If None, attempts to lookup with method based on git rev-parse. Falls back to gafusion on failure.

  • destination_org – string [optional] Used for cross-fork pull requests: specify the destination org of the pull request. The pull request actually exists on this org, but it is not where the source branch lives. If None it defaults to same as org

  • repository – string [optional] The name of the repo on GitHub. If None, attempts to lookup with method based on git rev-parse. Falls back to OMFIT-source on failure.

  • branch – string [optional] Local/remote name for the current branch NOTE: there is an assumption that the local and upstream branches have same name

  • url – string [optional] Provided mainly for testing. Overrides the url that would be returned by git config --get remote.origin.url.

  • omfit_fallback – bool Default org and repository are gafusion and OMFIT-source instead of None and None in case of failed lookup.

  • no_pr_lookup – bool Improve speed by skipping lookup of pull request number

  • return_pr_destination_org – bool If an open pull request is found, changes remote, org, repository, and branch to match the destination side of the pull request. If there is no pull request or this flag is False, remote/org/repo/branch will correspond to the source.

  • server – string [optional] The server of the remote - usually github.com, but could also be something like vali.gat.com.

Returns

tuple containing 4 strings and a dict, with elements replaced by None on lookup failure: remote (str), org (str), repository (str), branch (str), pull_request_info (dict)

omfit_classes.omfit_github.post_comment_to_github(thread=None, comment=None, org=None, fork=None, repository=None, token=None)[source]

Posts a comment to a thread (issue or pull request) on GitHub. Requires setup of a GitHub token to work.

This function may be tested on fork=’gafusion’, thread=3208 if needed.

Setup:

1. Create a GitHub token with the "repo" (Full control of private repositories) box checked.
   See https://github.com/settings/tokens .

2. [Optional] Safely store the token to disk by executing:
      set_OMFIT_GitHub_token(token)
   This step allows you to skip passing the token to the function every time.
Parameters
  • thread – int [optional] The number of the issue or pull request within the fork of interest. If not supplied, the current branch name will be used to search for open pull requests on GitHub.

  • comment – string The comment to be posted

  • org – string [optional] Leave this as gafusion to post to the main OMFIT repo. Enter something else to post on a fork.

  • fork – string [optional] Redundant with org. Use org instead. Fork is provided for backwards compatibility

  • repository – string [optional] The name of the repo on GitHub. If None, attempts to lookup with method based on git rev-parse. Falls back to OMFIT-source on failure. You should probably leave this as None unless you’re doing some testing, in which case you may use the regression_notifications repository under gafusion.

  • token – string or None. Token for accessing GitHub. None triggers an attempt to decrypt from the $GHUSER@token.github.com credential file; must be set up in advance with the set_OMFIT_GitHub_token() function.

Returns

response instance, as generated by requests. It should have a status_code attribute, which is normally int(201) for successful GitHub posts and probably 4xx for failures.

omfit_classes.omfit_github.find_gh_comments(thread=None, contains='automatic_regression_test_post', user=False, id_only=True, org=None, repository=None, **kw)[source]

Looks up comments on a GitHub issue or pull request and searches for ones with body text matching contains

Parameters
  • thread – int or None int: issue or pull request number None: look up pull request number based on active branch name. Only works if a pull request is open.

  • contains – string or list of strings Check for these strings within comment body text. They all must be present.

  • user – bool or string [optional] True: only consider comments made with the current username (looks up GITHUB_username from MainSettings) string: only comments made by the specified username.

  • id_only – bool Return only the comment ID numbers instead of full dictionary of comment info

  • org – string [optional] The organization that the repo is under, like ‘gafusion’. If None, attempts to lookup with method based on git rev-parse. Falls back to gafusion on failure.

  • repository – string [optional] The name of the repo on GitHub. If None, attempts to lookup with method based on git rev-parse. Falls back to OMFIT-source on failure.

  • **kw – keywords to pass to OMFITgithub_paged_fetcher

Returns

list of dicts (id_only=False) or list of ints (id_only=True)

omfit_classes.omfit_github.delete_matching_gh_comments(thread=None, keyword=None, test=True, token=None, org=None, repository=None, quiet=False, exclude=None, exclude_contain=None, match_username=None, **kw)[source]

Deletes GitHub comments that contain a keyword. Use CAREFULLY for clearing obsolete automatic test report posts.

Parameters
  • thread – int [optional] Supply issue or comment number or leave as None to look up an open pull request # for the active branch

  • keyword – string or list of strings CAREFUL! All comments which match this string will be deleted. If a list is provided, every substring in the list must be present in a comment.

  • test – bool Report which comments would be deleted without actually deleting them.

  • token – string or None. Token for accessing GitHub. None triggers an attempt to decrypt from the $GHUSER@token.github.com credential file; must be set up in advance with the set_OMFIT_GitHub_token() function.

  • org – string [optional] The organization that the repo is under, like ‘gafusion’. If None, attempts to lookup with method based on git rev-parse. Falls back to gafusion on failure.

  • repository – string [optional] The name of the repo on GitHub. If None, attempts to lookup with method based on git rev-parse. Falls back to OMFIT-source on failure.

  • quiet – bool Suppress print output

  • exclude – list of strings [optional] List of CIDs to exclude / protect from deletion. In addition to actual CIDs, the special value of ‘latest’ is accepted and will trigger lookup of the matching comment with the most recent timestamp. Its CID will replace ‘latest’ in the list.

  • exclude_contain – list of strings [optional] If provided, comments must contain all of the strings listed in their body in order to qualify for exclusion.

  • match_username – bool or string [optional] True: Only delete comments that match the current username. string: Only delete comments that match the specified username.

  • **kw – keywords to pass to find_gh_comments()

Returns

list of responses from requests (test=False) or list of dicts with comment info (test=True) response instances should have a status_code attribute, which is normally int(201) for successful GitHub posts and probably 4xx for failures.

omfit_classes.omfit_github.edit_github_comment(comment_mark=None, separator='---', new_content=None, mode='replace_between', thread=None, org=None, repository=None, token=None)[source]

Edits GitHub comments to update automatically generated information, such as regression test reports.

Parameters
  • comment_mark – str or None None: edit top comment. str: Search for a comment (not including top comment) containing comment_mark as a substring.

  • new_content

    str New content to put into the comment

    Special cases: If content is None and mode == ‘replace_between’:

    - Separate open/close separators and the close separator is present in the target: delete between the 1st open separator and the next close separator.
    - Separate open/close separators and the close separator is not present in the target: delete everything after the 1st open separator.
    - All same separator and one instance present in the target: delete it and everything after.
    - All same separator and multiple instances present: delete the first two and everything in between.

    If content is None and mode != ‘replace_between’, raise ValueError. If content is None and mode == ‘replace_between’ but the separator is not in the comment, abort but do not raise.

  • separator – str or list of strings Substring that separates parts that should be edited from parts that should not be edited. ‘—’ will put a horizontal rule in GitHub comments, which seems like a good idea for this application. If this is a list, the first and second elements will be used as the opening and closing separators to allow for different open/close around a section.

  • mode

    str Replacement behavior. ‘replace’: Replace entire comment. Ignores separator. ‘append’: Append new content to old comment. Places separator between old and new if separator is supplied.

    Closes with another separator if separate open/close separators are supplied; otherwise just places one separator between the original content and the addition.

    ’replace_between’: Replace between first & second appearances of separator (or between open & close separators).

    Raises ValueError if separator is not specified. Acts like mode == ‘append’ if separator (or opening separator) is not already present in target comment.

    other: raises ValueError

  • thread – int [optional] Issue or pull request number. Looked up automatically if not provided

  • org – str [optional] GitHub org. Determined automatically if not supplied.

  • repository – str [optional] GitHub repository. Determined automatically if not supplied.

  • token – str Token for accessing GitHub. Decrypted automatically if not supplied.

Returns

response instance or None None if aborted before attempting to post, otherwise a response instance, which is an object generated by the requests module. It should have a status_code attribute which is 2xx on success and often 4xx for failures.

omfit_classes.omfit_github.set_gh_status(org=None, repository=None, commit=None, state=None, context='regression/auto', description='', target_url=None, destination_org=None)[source]

Updates the status of a pull request on GitHub. Appears as green check mark or red X at the end of the thread.

Parameters
  • org – string [optional] The organization that the repo is under, like ‘gafusion’. If None, attempts to lookup with method based on git rev-parse. Falls back to gafusion on failure.

  • destination_org – string [optional] Used for cross-fork pull requests: specify the destination org of the pull request. The pull request actually exists on this org, but it is not where the source branch lives. Passed to get_pull_request_number when determining whether a pull request is open. Defines first org to check. If None it defaults to same as org

  • repository – string [optional] The name of the repo on GitHub. If None, attempts to lookup with method based on git rev-parse. Falls back to OMFIT-source on failure.

  • commit

    commit hash or keyword ‘latest’ or ‘HEAD~0’:

    Look up latest commit. This is appropriate if you have reloaded modules and classes as needed.

    ’omfit’ or None:

    use commit that was active when OMFIT launched. Appropriate if testing classes as loaded during launch and not reloaded since.

    else: treats input as a commit hash and will fail if it is not one.

  • state – string or bool ‘success’ or True: success -> green check ‘failure’ or False: problem -> red X ‘pending’ -> yellow circle

  • context – string Match the context later to update the status

  • description – string A string with a description of the status. Up to 140 characters. Long strings will be truncated. Line breaks, quotes, parentheses, etc. are allowed.

  • target_url – string URL to associate with the status

Returns

response instance, as generated by requests. It should have a status_code attribute, which is normally 201 for successful GitHub posts and probably 4xx for failures.

omfit_gks

class omfit_classes.omfit_gks.OMFITgksout(*args, **kwargs)[source]

Bases: omfit_classes.omfit_nc.OMFITnc

plot(asd=None)[source]

omfit_gnuplot

class omfit_classes.omfit_gnuplot.OMFITgnuplot(*args, **kwargs)[source]

Bases: omfit_classes.omfit_ascii.OMFITascii, omfit_classes.sortedDict.SortedDict

Class used to parse ascii gnuplot data files. Gnuplot data indices are split by double empty lines; gnuplot data blocks are split by a single empty line.

Parameters
  • filename – filename passed to OMFITobject class

  • comment – character

  • **kw – keyword dictionary passed to OMFITobject class

load()[source]
plot(ix=0, iy=1, **kw)[source]

Plot individual data-sets

Parameters
  • ix – index of the data column to be used for the X axis

  • iy – index of the data column to be used for the Y axis

  • **kw – parameters passed to the matplotlib plot function
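The splitting rule described in the class docstring (indices separated by double empty lines, blocks by single empty lines) can be illustrated with plain string operations; `split_gnuplot` is a hypothetical helper, not a method of the class:

```python
def split_gnuplot(text, comment='#'):
    """Split gnuplot-style data: double blank lines separate indices,
    single blank lines separate blocks within an index."""
    indices = []
    for index_chunk in text.split('\n\n\n'):
        blocks = []
        for block_chunk in index_chunk.split('\n\n'):
            rows = [
                [float(v) for v in line.split()]
                for line in block_chunk.splitlines()
                if line.strip() and not line.lstrip().startswith(comment)
            ]
            if rows:
                blocks.append(rows)
        if blocks:
            indices.append(blocks)
    return indices
```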

omfit_google_sheet

Tools for interfacing with Google sheets

Compared to SQL tables, Google sheets allow for formatting and other conveniences that make them easier to update by hand (appropriate for some columns like labels, comments, etc.), but the API for Google sheets is relatively simple. This file is meant to provide tools for looking up data by column header instead of column number/letter and by shot instead of row number. That is, the user should not have to know which row holds shot 123456 or which column holds density in order to get the average density for shot 123456.

In this script, the purely numeric way to refer to cell ‘A1’ is (0, 0). Do not allow any reference to A1 as (1, 1) to creep into this code or it will ruin everything. Referring to (‘A’, 1) is fine; there’s a function for interpreting cell references, but it uses the presence of a letter in the column as a cue that the row should be counted from 1, so it can’t handle counting columns from 1. Some packages DO use numbers (from 1) instead of indices (from 0) internally. If one of these packages is used, its returned values must be converted immediately. YOU WON’T LIKE WHAT HAPPENS IF YOU FAIL TO MAINTAIN THIS DISCIPLINE.
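The zero-based convention described above can be made concrete with a small converter from ‘A1’-style addresses to (row, column) indices counted from 0. This is a hypothetical illustration (the module's own parser is interpret_row_col, which handles more cases):

```python
import re

def a1_to_idx(address):
    """Convert an 'A1'-style address to zero-based (row_idx, col_idx).
    'A1' -> (0, 0); the presence of a letter cues that the row is
    counted from 1, as described above."""
    match = re.fullmatch(r'([A-Z]+)([0-9]+)', address.upper())
    if not match:
        raise ValueError('bad cell address: %r' % address)
    letters, row = match.groups()
    col = 0
    for ch in letters:  # base-26 with A=1, so 'AA' follows 'Z'
        col = col * 26 + (ord(ch) - ord('A') + 1)
    return int(row) - 1, col - 1
```

Any value obtained from a package that counts from 1 should be converted to this indexing immediately, per the warning above.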

class omfit_classes.omfit_google_sheet.OMFITgoogleSheet(*args, **kwargs)[source]

Bases: omfit_classes.omfit_base.OMFITtree

Connects to Google sheets and provides some convenience features

  • Lookup columns by column header name instead of column number

  • Local caching to allow repeated read operations without too many connections to the remote sheet.

  • Throttle to avoid throwing an exception due to hitting Google’s rate limiter, or pygsheet’s heavy-handed rate limit obedience protocol.

An example sheet that is compatible with the assumptions made by this class may be found here: https://docs.google.com/spreadsheets/d/1MJ8cFjFZ2pkt4OHWWIciWT3sM2hSADqzG78NDiqKgkU/edit?usp=sharing

A sample call that should work to start up an OMFITgoogleSheet instance is:

>> gsheet = OMFITgoogleSheet(
>>     keyfile=os.sep.join([OMFITsrc, '..', 'samples', 'omfit-test-gsheet_key.json']),
>>     sheet_name='OmfitDataSheetTestsheet',
>>     subsheet_name='Sheet1',  # Default: lookup first subsheet
>>     column_header_row_idx=5,  # Default: try to guess
>>     units_row_idx=6,  # Default: None
>>     data_start_row_idx=7,  # Default: header row + 1 (no units row) or + 2 (if units row specified)
>> )

This call should connect to the example sheet. This is more than an example; it is a functional call that is read out of the docstring by the regression test, and testing will fail if it doesn’t work properly.

Parameters
  • filename – str Not used, but apparently required when subclassing from OMFITtree.

  • keyfile – str or dict-like Filename with path of the file with authentication information, or dict-like object with the parsed file contents (OMFITjson should work well). See setup_help for help setting this up.

  • sheet_name – str Name of the Google sheets file/object/whatever to access

  • subsheet_name – str Sheet within the sheet (the tabs at the bottom). Defaults to the first tab.

  • column_header_row_idx – int Index (from 0) of the row with column headers. If not specified, we will try to guess for you. Indices like this are stored internally.

  • column_header_row_number – int Number (from 1) of the row with column headers, as it appears in the sheet. Ignored if column_header_row_idx is specified. If neither is specified, we will try to guess for you. This will be converted into an index (from 0) for internal use.

  • units_row_idx – int or None Index (from 0) of the row with the units, if there is one, or None if there isn’t a row for units.

  • units_row_number – int or None Number (from 1) of the units row. See description of column_header_row_number.

  • data_start_row_idx – int Index (from 0) of the row where data start. Defaults to column_header_row + 1 if units_row is None or +2 if units_row is not None.

  • data_start_row_number – int Number (from 1) of the first row of data after the header. See description of column_header_row_number.

  • kw – additional keywords passed to super class’s __init__

record_request()[source]

Logs a request to the Google API so the rate can be avoided by local throttling

self_check(essential_only=False)[source]

Makes sure setup is acceptable and raises AssertionError otherwise

Parameters

essential_only – bool Skip checks that aren’t essential to initializing the class and its connection. This avoids spamming warnings about stuff that might get resolved later in setup.

authorize_client()[source]

Deal with keyfile being allowed to be either a string or a dict-like

Returns

pygsheets.client.Client instance

connect(attempt=0)[source]

Establishes connection to google sheet

sheet_connection()[source]

Attempt to connect on demand and return a Worksheet object

Returns

Worksheet instance

post_connect()[source]

Fill in some handy information; usually desired after connecting

static help()[source]

Prints help

direct_write(value, *args)[source]

Writes data to a single cell in the remote sheet

Parameters
  • value – numeric or string This will be written to the cell

  • args – address information, such as (column, row) or address string. Examples: (‘A’, 1), (0, 0), ‘A1’. See interpret_row_col() for more information about how to specify row and column.

direct_read(*args, formatted=False)[source]

Reads data from a single cell in the remote sheet

Parameters
  • args – passed to interpret_row_col Provide address information here, like (‘A’, 1), (0, 0), or ‘A1’

  • formatted – bool Get the formatted value (floats will be rounded according to sheet display settings)

Returns

str or number If formatted = True: str

Might contain something that can be trivially cast as int or float, but it is always a string

If formatted = False: str or number

With formatting off, numbers may be returned

time_until_throttle_okay()[source]

Checks history of previous requests and decides how long we must wait before making a new request

obey_rate_limit()[source]

Waits as needed to avoid rate limit
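The throttling pattern behind record_request(), time_until_throttle_okay(), and obey_rate_limit() can be sketched as a sliding-window rate limiter. The quota and window below are assumptions for illustration; the actual class may use different limits:

```python
import time

class RequestThrottle:
    """Track request timestamps and compute how long to wait so that no
    more than max_requests fall within any window_s-second window."""

    def __init__(self, max_requests=50, window_s=100.0):
        self.max_requests = max_requests
        self.window_s = window_s
        self.history = []

    def record_request(self, now=None):
        """Log a request so its timestamp counts against the quota."""
        self.history.append(time.time() if now is None else now)

    def time_until_okay(self, now=None):
        """Seconds to wait before the next request is within quota."""
        now = time.time() if now is None else now
        recent = [t for t in self.history if now - t < self.window_s]
        if len(recent) < self.max_requests:
            return 0.0
        # Wait until the oldest request in the window ages out
        return self.window_s - (now - min(recent))
```

Sleeping for time_until_okay() before each read/write is what lets the throttled_* methods avoid tripping Google's rate limiter.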

throttled_write(value, *args)[source]

Checks whether it’s okay to make a request and waits if necessary to avoid an error

Parameters
  • value – numeric or string

  • args – arguments to pass to direct_read()

throttled_read(*args, **kw)[source]

Checks whether it’s okay to make a request and waits if necessary to avoid an error

Parameters
  • args – arguments to pass to direct_read()

  • kw – keywords to pass to direct_read()

Returns

value returned from direct_read()

guess_header_row()[source]

Tries to determine which row holds column headers based on the presence of likely column names

get_columns()[source]

Gets a list of column names from the header row

find_column(column_name, quiet=False)[source]

Finds the column index corresponding to a column name

Parameters
  • column_name – str Column header name (should be in row given by column_header_row)

  • quiet – bool No printing

Returns

int (success) or None (failed to find column) Column index (from 0; not number from 1)

update_cache(force=True)[source]

Updates the cached representation if forced or if it seems like it’s a good idea

If it’s been a while since the cache was updated, do another update. If we know there was a write operation since the last update, do another update. If the cache is missing, grab the data and make the cache. If force=True, do another update.

Parameters

force – bool Force an update, even if other indicators would lead to skipping the update

get_column(column, force_update_cache=False, disable_cache=False)[source]

Gets data from a column

By default, the local cache will be checked & updated if needed and then data will be read from the cache. Caching can be disabled, which will result in more connections to the remote sheet. Some other methods have been programmed assuming that local caching done here will save them from otherwise inefficient layout of calls to this function.

Parameters
  • column – int or str Column index (from 0) or column letter (like ‘A’, ‘B’, … ‘ZZ’)

  • force_update_cache – bool Force the cache to update before reading

  • disable_cache – bool Don’t go through the local cache; read the remote column directly

Returns

list Data from the column

get_column_by_name(column_name, **kw)[source]

Gets data from a column by name

Parameters
  • column_name – str Name of the column in the header row

  • kw – additional parameters passed to get_column

Returns

list Data from the column

get_row(row_idx, force_update_cache=False, disable_cache=False)[source]

Gets data from a row

By default, cached results are used after updating the local cache as needed (see get_column for details).

Parameters
  • row_idx – int Row index (from 0)

  • force_update_cache – bool Force the cache to update before reading

  • disable_cache – bool Don’t go through the local cache; read the remote row directly

Returns

array Data from the row

class omfit_classes.omfit_google_sheet.OMFITexperimentTable(*args, **kwargs)[source]

Bases: omfit_classes.omfit_google_sheet.OMFITgoogleSheet

Extends OMFITgoogleSheet by assuming each row corresponds to a shot, allowing more features to be provided.

This is less general (now Shot must exist), but it may be more convenient to look up data by shot this way. With more assumptions about the structure of the sheet, more methods can be implemented.

Many methods go further to assume that there is a column named ‘Device’ and that there are two columns that can be used to determine a time range. Parts of the class should work on sheets without this structure, but others will fail.

An example sheet that is compatible with the assumptions made by this class may be found here: https://docs.google.com/spreadsheets/d/1MJ8cFjFZ2pkt4OHWWIciWT3sM2hSADqzG78NDiqKgkU/edit?usp=sharing

A sample call that should work to start up an OMFITexperimentTable instance is:

>> xtable = OMFITexperimentTable(
>>     keyfile=os.sep.join([OMFITsrc, '..', 'samples', 'omfit-test-gsheet_key.json']),
>>     sheet_name='OmfitDataSheetTestsheet',
>>     subsheet_name='Sheet1',  # Default: lookup first subsheet
>>     column_header_row_idx=5,  # Default: try to guess
>>     units_row_idx=6,  # Default: search for units row
>>     data_start_row_idx=7,  # Default: find first number after the header row in the shot column
>> )

This call should connect to the example sheet. These data are used in regression tests and should be updated if the test sheet is changed.

Parameters

kw – passed to parent class: OMFITgoogleSheet

post_connect()[source]

Do some inspection on the sheet after connecting to it

self_check(essential_only=False)[source]

Makes sure setup is acceptable and raises AssertionError otherwise

Parameters

essential_only – bool Skip checks that aren’t essential to initializing the class and its connection. This avoids spamming warnings about stuff that might get resolved later in setup.

guess_header_row()[source]

Tries to determine which row holds column headers

This is a simplification of the parent class’s methods, because here we explicitly require a ‘Shot’ column. It should be faster and more reliable this way.

find_units_row()[source]

Recommended: put a row of units under the column headers (data won’t start on the next row after headers)

Were recommendations followed? See if 1st row under headers has “units” under a column named Shot, id, or ID. In this case, data start on the row after units.

This function is easier to implement under OMFITexperimentTable instead of OMFITgoogleSheet because we are allowed to make additional assumptions about the content of the table.

If there is no cell containing just ‘units’, ‘Units’, or ‘UNITS’, but you do have a row for units, you should specify units_row at init.

Returns

int Index (from 0) of the row with units

find_data_start_row()[source]

Determine the first row of data.

Allowing this to be different from header + 1 means meta data can be inserted under the column headers.

Since this class imposes the requirement that the “Shot” column exists & has shot numbers, we can find the first valid number in that column, after the header row, and be safe in assuming that the data start there.

To allow for nonstandard configurations of blank cells, please specify data_start_row manually in init.

Returns

int Index (from 0) of the row where data start.
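The search described above (first cell in the Shot column after the header that parses as a number) can be sketched as follows; `first_data_row` is a hypothetical helper, and the real method reads the column through the cache:

```python
def first_data_row(shot_column, header_row_idx):
    """Return the index (from 0) of the first cell after the header row
    that parses as a number, or None if no numeric cell is found."""
    for idx, cell in enumerate(shot_column):
        if idx <= header_row_idx:
            continue
        try:
            float(str(cell).strip())
        except ValueError:
            continue  # blank cells, units, or metadata under the header
        return idx
    return None
```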

extract_column(column, pad_with='', force_type=None, fill_value=nan, fill_if_not_found='not found')[source]

Downloads data for a given column and returns them with units

Parameters
  • column – str or int str: Name of the column; index will be looked up int: Index of the column, from 0. Name of the column can be looked up easily.

  • pad_with – str Value to fill in to extend the column if it is too short. Truncation of results to the last non-empty cell is possible, meaning columns can have inconsistent lengths if some have more empty cells than others.

  • force_type – type [optional] If provided, pass col_values through list_to_array_type() to force them to match force_type.

  • fill_value – object Passed to list_to_array_type(), if used; used to fill in cells that can’t be forced to force_type. It probably would make sense if this were the same as pad_with, but it doesn’t have to be. It does have to be of type force_type, though.

  • fill_if_not_found – str Value to fill in if data are not found, such as if column_name is None.

Returns

(array, str or None) values in the column, padded or cut to match the number of shots

replaced by fill_if_not_found in case of bad column specifications replaced by fill_value for individual cells that can’t be forced to type force_type extended by pad_with as needed

units, if available or None otherwise

interpret_timing(**signal)[source]

Interprets timing information in the signal dictionary for write_mds_data_to_table().

Either tmin and tmax or time and dt are required to be in the dictionary as keys. See write_mds_data_to_table() documentation for more information.

Parameters

signal – Just pass in one of the dictionaries in the list of signals.

Returns

(float array, float array) Array of start times and array of end times in ms. One entry per row.
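The two documented timing specifications (a tmin/tmax pair, or time ± dt) both reduce to per-row start/end values. A minimal sketch, assuming string column references have already been resolved to per-row lists and that a plain float applies to every row:

```python
def timing_to_ranges(n_rows, tmin=None, tmax=None, time=None, dt=None):
    """Return (start, end) lists in ms, one entry per row.
    A usable tmin/tmax pair takes precedence over time+dt, as documented."""
    def per_row(v):
        if isinstance(v, (int, float)):
            return [float(v)] * n_rows
        return [float(x) for x in v]
    if tmin is not None and tmax is not None:
        return per_row(tmin), per_row(tmax)
    if time is not None and dt is not None:
        t, d = per_row(time), per_row(dt)
        return [a - b for a, b in zip(t, d)], [a + b for a, b in zip(t, d)]
    raise ValueError('need tmin & tmax or time & dt')
```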

interpret_signal(i, device_servers, **signal)[source]

Interprets signal specification data for write_mds_data_to_table()

Parameters
  • i – int Row / shot index

  • device_servers – dict-like Dictionary listing alternative servers for different devices, like {‘EAST’: ‘EAST_US’}

  • signal – Just pass in one of the dictionaries in the list of signals.

Returns

tuple containing:
server: str
tree: str or None
tdi: str
factor: float
tfactor: float
device: str

write_mds_data_to_table(signals, overwrite=False, device_servers=None)[source]

Gets MDSplus data for a list of signals, performs operations on specified time ranges, & saves results.

This sample signal can be used as an instructional example or as an input for testing:

>> sample_signal_request = dict(
>>     column_name='density',  # The sheet must have this column (happens to be a valid d3d ptname)
>>     TDI=dict(EAST=r'\dfsdev'),  # d3d's TDI isn't listed b/c TDI defaults to column_name
>>     treename=dict(EAST='PCS_EAST'),  # DIII-D isn't listed b/c default is None, which goes to PTDATA
>>     factor={'DIII-D': 1e-13},  # This particular EAST pointname doesn't require a factor; defaults to 1
>>     tfactor=dict(EAST=1e3),  # d3d times are already in ms
>>     tmin='t min',  # str means read timing values from column w/ header exactly matching this string
>>     tmax='t max',
>> )

This sample should work with the example/test sheet.

If the sheet in question had ONLY DIII-D data, the same signal request could be accomplished via:

>> simple_sample_signal_request = dict(column_name='density', factor=1e-13, tmin='t min', tmax='t max')

This sample won’t work with the test sheet; it will fail on the EAST shots. We are exploiting the shortcut that we’ve used a valid pointname (valid for d3d at least) as the column header. If you want fancy column names, you have to specify TDIs. We are also relying on the defaults working for this case.

The sample code in this docstring is interpreted and used by the regression test, so don’t break it. Separate different samples with non-example lines (that don’t start with >>)

Parameters
  • signals

    list of dict-likes Each item in this list should be a dict-like that contains information needed to fetch & process data. SIGNAL SPECIFICATION - column_name: str (hard requirement)

    Name of the column within the sheet

    • TDI: None, str, or dict-like (optional if default behavior (reuse column_name) is okay)

      None: reuse column_name as the TDI for every row str: use this str as the pointname/TDI for every row dict-like: keys should be devices. Each device can have its own pointname/TDI. The sheet must have a Device row. If a device is missing, it will inherit the column_name.

    • treename: None, str, or dict-like (optional if default behavior (d3d ptdata) is okay)

      None: PTDATA (DIII-D only) str: use this string as the treename for every row dict-like: device specific treenames. The sheet must have a Device row.

    • factor: float or dict-like (defaults to 1.0)

      float: multiply results by this number before writing dict-like: multiply results by a device specific number before writing (unspecified devices get 1)

    • tfactor: float or dict-like (defaults to 1.0)

      float: multiply times by this number to get ms dict-like: each device gets a different factor used to put times in ms (unspecified devices get 1)

    PROCESSING - operation: str (defaults to ‘mean’)

    Operation to perform on the gathered data. Options are: - ‘mean’ - ‘median’ - ‘max’ - ‘min’

    • tmin: float or str

      Start of the time range in ms; used for processing operations like average. Must be paired with tmax. A usable tmin/tmax pair takes precedence over time+dt. A float is used directly. A string triggers lookup of a column in the sheet; then every row gets its own number determined by its entry in the specified column.

    • tmax: float or str

      End of the time range in ms. Must be paired with tmin.

    • time: float or str

      Center of a time range in ms. Ignored if tmin and tmax are supplied. Must be paired with dt.

    • dt: float or str

      Half-width of time window in ms. Must be paired with time.

  • overwrite – bool Update the target cell even if it’s not empty?

  • device_servers – dict-like [optional] Provide alternative MDS servers for some devices. A common entry might be {‘EAST’: ‘EAST_US’} to use the ‘EAST_US’ (eastdata.gat.com) server with the ‘EAST’ device.
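The per-device defaulting rules above (TDI falls back to column_name, treename defaults to None/PTDATA, factors default to 1) can be sketched as follows. `resolve_signal` is a hypothetical helper mirroring the documented defaults, not the class's interpret_signal method:

```python
def resolve_signal(device, column_name, TDI=None, treename=None, factor=1.0, tfactor=1.0):
    """Resolve device-specific signal settings: a scalar applies to every
    device, a dict is looked up per device, and missing entries fall
    back to the documented defaults."""
    def pick(spec, default):
        if spec is None:
            return default
        if isinstance(spec, dict):
            return spec.get(device, default)
        return spec
    return dict(
        tdi=pick(TDI, column_name),       # None -> reuse column_name
        treename=pick(treename, None),    # None -> PTDATA (DIII-D only)
        factor=float(pick(factor, 1.0)),
        tfactor=float(pick(tfactor, 1.0)),
    )
```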

write_efit_result(pointnames=None, **kw)[source]

Wrapper for write_mds_data_to_table for quickly setting up EFIT signals.

Assumes that device-specific variations on EFIT pointnames and primary trees are easy to guess, and that you have named your columns to exactly match EFIT pointnames in their “pure” form (no leading backslash).

Basically, it can build the signal dictionaries for you given a list of pointnames and some timing instructions.

Here is a set of sample keywords that could be passed to this function:

>> sample_kw = dict(
>>     pointnames=['betan'],
>>     tmin='t min',  # Can supply a constant float instead of a string
>>     tmax='t max',
>>     overwrite=True,
>>     device_servers=dict(EAST='EAST_US'),  # Probably only needed for EAST
>> )

These should work in xtable.write_efit_result(**sample_kw), where xtable is an OMFITexperimentTable instance connected to a google sheet that contains columns with headers ‘betan’, ‘t min’, and ‘t max’.

Parameters
  • pointnames – list of strings Use names like betan, etc. This function will figure out whether you really need \betan instead.

  • kw – more settings, including signal setup customizations. * MUST INCLUDE tmin & tmax OR time & dt! <-- don’t forget to include timing data * Optional signal customization: operation, factor * Remaining keywords (other than those listed so far) will be passed to write_mds_data_to_table * Do not include column_name, TDI, treename, or tfactor, as these are determined by this function.

If you need this level of customization, just use write_mds_data_to_table() directly.

omfit_gpec

class omfit_classes.omfit_gpec.OMFITGPECbin(filename, lock=True, export_kw={}, **kwargs)[source]

Bases: omfit_classes.omfit_data.OMFITncDataset

OMFIT class used to interface with DCON and GPEC binary files.

Parameters
  • filename – String. File path (file name must start with original output name)

  • **kw – All additional key word arguments passed to OMFITobject

  • filename – Path to file

  • lock – Prevent in memory changes to the DataArray entries contained

  • exportDataset_kw – dictionary passed to exportDataset on save

  • data_vars – see xarray.Dataset

  • coords – see xarray.Dataset

  • attrs – see xarray.Dataset

  • **kw – arguments passed to OMFITobject

load()[source]

Parse GPEC binaries using the xdraw mapping for variable names and expected binary structures.

save()[source]

This method can detect if .filename was changed and if so, makes a copy from the original .filename (saved in the .link attribute) to the new .filename

class omfit_classes.omfit_gpec.OMFITGPECascii(filename, lock=True, export_kw={}, **kwargs)[source]

Bases: omfit_classes.omfit_data.OMFITncDataset, omfit_classes.omfit_ascii.OMFITascii

This class parses ascii tables in a very general way that catches most of the oddities in GPEC output file formatting (mostly inherited from DCON).

The data is stored as a Dataset in memory but has no save method, so the original file is never modified.

Parameters
  • filename – Path to file

  • lock – Prevent in memory changes to the DataArray entries contained

  • exportDataset_kw – dictionary passed to exportDataset on save

  • data_vars – see xarray.Dataset

  • coords – see xarray.Dataset

  • attrs – see xarray.Dataset

  • **kw – arguments passed to OMFITobject

load()[source]

Load ascii tables into memory.

save()[source]

This method can detect if .filename was changed and if so, makes a copy from the original .filename (saved in the .link attribute) to the new .filename

class omfit_classes.omfit_gpec.OMFITGPECnc(filename, lock=True, update_naming_conventions=True, export_kw={}, **kwargs)[source]

Bases: omfit_classes.omfit_data.OMFITncDataset

Child of OMFITncDataset, which is a hybrid xarray Dataset and OMFIT SortedDict object. This one updates the GPEC naming conventions when it is loaded, and locks the data by default so users don’t change the code outputs.

Parameters
  • filename – Path to file

  • lock – Prevent in memory changes to the DataArray entries contained

  • exportDataset_kw – dictionary passed to exportDataset on save

  • data_vars – see xarray.Dataset

  • coords – see xarray.Dataset

  • attrs – see xarray.Dataset

  • **kw – arguments passed to OMFITobject

load()[source]

Loads the netcdf file using the omfit_data importDataset function, then distributes those keys and attributes to this class.

Returns

class omfit_classes.omfit_gpec.OMFITldpi(filename, lock=True, export_kw={}, **kwargs)[source]

Bases: omfit_classes.omfit_data.OMFITncDataset

OMFIT class used to interface with L. Don Pearlstein’s inverse equilibrium code.

Parameters
  • filename – String. File path (file name must start with original output name)

  • **kw – All additional key word arguments passed to OMFITobject

  • filename – Path to file

  • lock – Prevent in memory changes to the DataArray entries contained

  • exportDataset_kw – dictionary passed to exportDataset on save

  • data_vars – see xarray.Dataset

  • coords – see xarray.Dataset

  • attrs – see xarray.Dataset

  • **kw – arguments passed to OMFITobject

load()[source]

Parse ldp_i binaries using the DCON read_eq_ldp_i as a guide for variable names and expected binary structures.

save()[source]

This method can detect if .filename was changed and if so, makes a copy from the original .filename (saved in the .link attribute) to the new .filename

plot(only2D=False, qcontour=False, levels=10, **kwargs)[source]

Quick plot of LDPI equilibrium.

Parameters
  • only2D – bool. Plot only the 2D contour of psi_n (or optionally q)

  • qcontour – bool. If true, levels of q are contoured in 2D plot (default is psi_n)

  • levels – int or np.ndarray. Number of (or explicit values of) levels to be contoured in 2D plot. None plots all levels in file

  • kwargs – All other key word arguments are passed to the matplotlib plot function

Returns

omfit_harvest

omfit_classes.omfit_harvest.ddb_float(float_in)[source]

Convert float to Decimal compatible with DynamoDB format

Parameters

float_in – input float

Returns

float in Decimal format

omfit_classes.omfit_harvest.harvest_send(*args, **kw)[source]

Function to send data to the harvesting server

Parameters
  • payload – dictionary with payload data

  • table – table where to put the data

  • host – harvesting server address. If None, take the value from the HARVEST_HOST environment variable, or use the default gadb-harvest.duckdns.org if not set.

  • port – port the harvesting server is listening on. If None, take the value from the HARVEST_PORT environment variable, or use the default 0 if not set.

  • verbose – print harvest message to screen. If None, take the value from the HARVEST_VERBOSE environment variable, or use the default False if not set.

  • tag – tag entry. If None, take the value from the HARVEST_TAG environment variable, or use the default Null if not set.

  • protocol – transmission protocol to be used (UDP or TCP). If None, take the value from the HARVEST_PROTOCOL environment variable, or use the default UDP if not set.

  • process – function passed by user that is called on each of the payload elements prior to submission

Returns

tuple with used (host, port, message)
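The explicit-value / environment-variable / default fallback scheme shared by several harvest_send keywords can be sketched with a small hypothetical helper (the default shown in the usage note is taken from the parameter descriptions above):

```python
import os

def harvest_setting(value, env_name, default):
    """Resolve a harvest_send-style setting: an explicit value wins,
    then the environment variable, then the documented default."""
    if value is not None:
        return value
    return os.environ.get(env_name, default)
```

For example, harvest_setting(None, 'HARVEST_HOST', 'gadb-harvest.duckdns.org') reproduces the documented lookup for the host parameter.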

omfit_classes.omfit_harvest.harvest_nc(filename, entries=None, verbose=False)[source]

Function that returns data contained in a NetCDF3 file to be sent by harvest_send

Parameters
  • filename – NetCDF3 file

  • entries – subset of variables to look for in the NetCDF file

  • verbose – print payload

Returns

payload dictionary

class omfit_classes.omfit_harvest.OMFITharvest(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict

load()[source]
class omfit_classes.omfit_harvest.OMFITharvestS3(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict

long_term_storage()[source]
list_items()[source]
get(key)[source]

Return the value for key if key is in the dictionary, else default.

load()[source]

Download the data and unpack the nested pickle files

get_array_form(var=None, datain=None)[source]

Iterate over the table rows to get the values of var, or of all variables if var is None

Parameters

var – Parameter to return (accumulated over all rows) or all variables if None

omfit_classes.omfit_harvest.dynamo_to_S3(table, pickle_main_directory='/home/fusionbot/harvest_database', upload=True, **kw)[source]

This function fetches data in a OMFITharvest databases and saves it as OMFITharvestS3 pickle files

Parameters
  • table – dynamodb table to query

  • pickle_main_directory – where to save the pickled files

  • upload – upload pickle files to harvest S3 server

  • **kw – keyword arguments passed to the OMFITharvest call

Returns

list of strings with filenames that were created

omfit_classes.omfit_harvest.upload_harvestS3pickle(filename)[source]

Upload pickle file to harvest S3

omfit_hdf5

class omfit_classes.omfit_hdf5.OMFIThdf5raw(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.startup_framework.OMFITobject

OMFIT class that exposes directly the h5py.File class

load()[source]
class omfit_classes.omfit_hdf5.OMFIThdf5(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.startup_framework.OMFITobject

OMFIT class that translates an HDF5 file to a python dictionary. At this point this class is read-only: changes made to its content will not be reflected in the HDF5 file.

load()[source]
update([E, ]**F) → None.  Update D from dict/iterable E and F.[source]

If E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k]

omfit_classes.omfit_hdf5.dict2hdf5(filename, dictin, groupname='', recursive=True, lists_as_dicts=False, compression=None)[source]

Save hierarchy of dictionaries containing np-compatible objects to hdf5 file

Parameters
  • filename – hdf5 file to save to

  • dictin – input dictionary

  • groupname – group to save the data in

  • recursive – traverse the dictionary

  • lists_as_dicts – convert lists to dictionaries with integer strings

  • compression – gzip compression level
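
A standalone sketch (without h5py) of the recursive traversal dict2hdf5 performs: nested dicts become group paths, and with lists_as_dicts, lists become sub-groups keyed by integer strings. The helper name and example data are illustrative only:

```python
# Hypothetical sketch of dict2hdf5's traversal: flatten a nested dict into
# HDF5-style "group/dataset" paths. In the real function each leaf becomes
# an h5py dataset; here we just collect path -> value pairs.
def flatten(d, group='', recursive=True, lists_as_dicts=False):
    out = {}
    for key, value in d.items():
        path = f'{group}/{key}' if group else str(key)
        if isinstance(value, dict) and recursive:
            out.update(flatten(value, path, recursive, lists_as_dicts))
        elif isinstance(value, list) and lists_as_dicts:
            # lists_as_dicts: list index becomes an integer-string sub-key
            as_dict = {str(i): v for i, v in enumerate(value)}
            out.update(flatten(as_dict, path, recursive, lists_as_dicts))
        else:
            out[path] = value
    return out

tree = {'eq': {'psi': [1, 2]}, 'shot': 123}
print(flatten(tree, lists_as_dicts=True))
# {'eq/psi/0': 1, 'eq/psi/1': 2, 'shot': 123}
```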

omfit_helena

class omfit_classes.omfit_helena.OMFIThelena(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.omfit_ascii.OMFITascii

OMFIT class used to interface with Helena mapping file for ELITE

Parameters
  • filename – filename passed to OMFIThelena class

  • **kw – keyword dictionary passed to OMFITobject class, not currently used

load()[source]

Method used to load the content of the file specified in the .filename attribute

Returns

None

save()[source]

Method used to save the content of the object to the file specified in the .filename attribute

Returns

None

plot()[source]

Method used to plot all profiles, each in a different subplot

Returns

None

write_helena_mapping_lines(file, mat, title='', n=5)[source]

Writes a matrix in the HELENA mapping line format

: param file : file to write the data

: param mat: Matrix to be written, can be 1 or 2 dimensional

: title : title for the data

: param n : number of columns
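
A minimal standalone sketch of writing a matrix a fixed number of values per line, as write_helena_mapping_lines does; the exact HELENA number format (field width and precision) is an assumption here:

```python
# Hypothetical sketch: flatten a 1D or 2D matrix and write it n values per
# line, preceded by an optional title line. '%16.8E' is an assumed format.
import io

def write_mapping_lines(f, mat, title='', n=5):
    # flatten 2D input row by row; 1D input is used as-is
    flat = [x for row in mat for x in row] if isinstance(mat[0], (list, tuple)) else list(mat)
    if title:
        f.write(title + '\n')
    for i in range(0, len(flat), n):
        f.write(''.join('%16.8E' % x for x in flat[i:i + n]) + '\n')

buf = io.StringIO()
write_mapping_lines(buf, [0.1, 0.2, 0.3, 0.4, 0.5, 0.6], title='PSI', n=5)
print(buf.getvalue())
```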

read_nlines_and_split(file, n, read_title=True)[source]
read_2d_array(file, n1, n2)[source]
Helena_to_pFile(pFilename='')[source]

Translate OMFIThelena class to pfile data structure

Parameters

pFilename – pFile to which Helena data is overwritten

Returns

pfile OMFITpFile structure that has ne, te and ti elements

class omfit_classes.omfit_helena.OMFITmishkamap(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.omfit_ascii.OMFITascii

OMFIT class used to interface with the mapping file for MISHKA

Parameters
  • filename – filename passed to OMFITmishkamap class

  • **kw – keyword dictionary passed to OMFITobject class, not currently used

load()[source]

Method used to load the content of the file specified in the .filename attribute

Returns

None

save()[source]

Method used to save the content of the object to the file specified in the .filename attribute

The variables need to be written in particular order and format for MISHKA.

Returns

None

plot()[source]

Method used to plot all profiles, each in a different subplot

Returns

None

write_mishka_mapping_lines(file, mat, n=4, endline=True)[source]

Writes a matrix in the MISHKA mapping line format

Parameters
  • file – file to write the data to

  • mat – matrix to be written; can be 1- or 2-dimensional

  • n – number of columns

read_nlines_and_split(file, n)[source]
read_2d_array(file, n1, n2, columns=4)[source]
class omfit_classes.omfit_helena.OMFIThelenaout(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.omfit_ascii.OMFITascii

OMFIT class used to interface with Helena output file

Parameters
  • filename – filename passed to OMFITobject class

  • **kw – keyword dictionary passed to OMFITobject class

load()[source]

Method used to load the content of the file specified in the .filename attribute

Returns

None

plot()[source]

Method used to plot all profiles, each in a different subplot

Returns

None

read_1d_vectors(f, dataident)[source]

Method to read 1D vectors from a HELENA output file.

Parameters
  • f – file to read the data from. It is assumed that the file is at the right position to start reading.

  • dataident – a list containing 5 elements:
    [0] : names of the data to be read; the global 1d dictionary will use these names
    [1] : the column indicating the location of the psinorm vector
    [2] : the exponent needed to produce psinorm: 1 = the data in the file is already psinorm; 2 = the data is in sqrt(psinorm)
    [3] : column numbers for the data
    [4] : a string indicating the end of the data

read_matrix(f, end='*', separator=' ')[source]

Method that reads a 2D ASCII matrix and turns it into a np array. Reads until the end string is found.

update_aEQDSK(aEQDSK)[source]
p_jz_q_plot()[source]
class omfit_classes.omfit_helena.OMFIThelenainput(*args, **kwargs)[source]

Bases: omfit_classes.omfit_namelist.OMFITnamelist

Class for the input namelist for HELENA

profiles_from_pfile(pfile)[source]

Reconstructs Te, Ne, Ti and Zeff profiles from a pFile

read_from_eqdsk(EQDSK, boundary_flx=0.996, nf=256, boundary_plot=True, bmultip=1.0, boundary=None, symmetric=False)[source]
read_from_gato(dskgato, nf=256, bmultip=1.0, dskbalfile=None)[source]

Reads a dskgato file and extracts the global parameters and the boundary shape from it. Calculates HELENA parameters and reconstructs the boundary using the Fourier representation.

omfit_idl

class omfit_classes.omfit_idl.OMFITidl(*args, **kwargs)[source]

Bases: omfit_classes.namelist.NamelistFile, omfit_classes.omfit_ascii.OMFITascii

OMFIT class used to interface with IDL language files (with only declarations in it)

Parameters
  • filename – filename passed to OMFITobject class

  • **kw – keyword dictionary passed to OMFITascii class

load()[source]

Method used to load the content of the file specified in the .filename attribute

Returns

None

save()[source]

Method used to save the content of the object to the file specified in the .filename attribute

Returns

None

omfit_idlSav

class omfit_classes.omfit_idlSav.OMFITidlSav(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.startup_framework.OMFITobject

OMFIT class used to interface with IDL .sav files. Note that these objects are "READ ONLY", meaning that changes to the entries of this object will not be saved to a file. This class is based on a modified version of the idlsave class provided by https://github.com/astrofrog/idlsave The modified version (omfit/omfit_classes/idlsaveOM.py) returns python dictionaries instead of np.recarray objects

Parameters
  • filename – filename passed to OMFITobject class

  • **kw – keyword dictionary passed to OMFITobject class

load()[source]

Method used to load the content of the file specified in the .filename attribute

Returns

None

omfit_ini

class omfit_classes.omfit_ini.OMFITiniSection(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict

translate(key=None)[source]
class omfit_classes.omfit_ini.OMFITini(*args, **kwargs)[source]

Bases: omfit_classes.omfit_ini.OMFITiniSection, omfit_classes.omfit_ascii.OMFITascii

OMFIT class used to interface with configuration files (INI files). These types of files are used by codes such as IMFIT and IPS

This class is based on the configobj class https://configobj.readthedocs.io/en/latest/index.html

Parameters
  • filename – filename passed to OMFITascii class

  • **kw – keyword dictionary passed to OMFITascii class
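
OMFITini wraps the configobj package; a minimal stdlib approximation with configparser shows the same INI-to-nested-dict idea (configobj additionally supports nested [[sub-sections]], which configparser does not, and the section/key names here are invented):

```python
# Hypothetical sketch: parse an INI fragment into a plain nested dict,
# analogous to what OMFITini exposes as a tree.
import configparser

text = """
[SIMULATION]
code = IPS
steps = 10
"""
cp = configparser.ConfigParser()
cp.read_string(text)
ini = {section: dict(cp[section]) for section in cp.sections()}
print(ini['SIMULATION']['code'])  # IPS
```

Note that configparser returns all values as strings; configobj (and thus OMFITini) behaves similarly unless a validation spec is used.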

load()[source]

Method used to load the content of the file specified in the .filename attribute

Returns

None

save()[source]

Method used to save the content of the object to the file specified in the .filename attribute

Returns

None

omfit_interface

class omfit_classes.omfit_interface.OMFITinterface(*args, **kwargs)[source]

Bases: omfit_classes.omfit_interface.OMFITinterfaceAdaptor

Parameters
  • data – input data to be read by the adaptor

  • adaptor – adaptor class to be used

  • adaptorsFile – adaptor class to be used (defaults to ‘OMFIT-source/omfit/adaptors.py’)

  • t – t at which to return data

  • r – r at which to return data

  • squeeze – string with dimensions (‘r’,’t’,’rt’) to squeeze if length is one

  • checkAdaptors – check adaptors for inconsistencies (only if adaptor keyword is not passed)

static getAdaptors(adaptorsFile=None)[source]
property type

str(object=’’) -> str str(bytes_or_buffer[, encoding[, errors]]) -> str

Create a new string object from the given object. If encoding or errors is specified, then the object must expose a data buffer that will be decoded using the given encoding and error handler. Otherwise, returns the result of object.__str__() (if defined) or repr(object). encoding defaults to sys.getdefaultencoding(). errors defaults to ‘strict’.

definitions(key, data)[source]
get(item, dim, t=None, r=None, interp=None, squeeze=None)[source]

Return the value for key if key is in the dictionary, else default.

static checkInterfaceAdaptors(namespace=None, checkList=['units', 'description'], quiet_if_ok=True)[source]
omfit_classes.omfit_interface.interface(data, *args, **kw)[source]

omfit_ionorb

class omfit_classes.omfit_ionorb.OMFITionorbConfig(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.omfit_ascii.OMFITascii

OMFIT class used to read/write IONORB configuration file

Parameters
  • filename – filename

  • kw – additional parameters passed to the OMFITascii class

load()[source]
save()[source]

The save method is supposed to be overridden by classes which use OMFITobject as a superclass. If left as it is this method can detect if .filename was changed and if so, makes a copy from the original .filename (saved in the .link attribute) to the new .filename

class omfit_classes.omfit_ionorb.OMFITionorbBirth(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.omfit_ascii.OMFITascii

OMFIT class used to read/write IONORB birth list or profiles

Parameters
  • filename – filename

  • kw – additional parameters passed to the OMFITascii class

columns = ['R[m]', 'Phi[Rad]', 'z[m]', 'v_R[m/s]', 'v_phi[m/s]', 'v_z[m/s]']
load()[source]
save2()[source]
class omfit_classes.omfit_ionorb.OMFITionorbHits(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.omfit_ascii.OMFITascii

OMFIT class used to read the IONORB hits file

Parameters
  • filename – filename

  • kw – additional parameters passed to the OMFITascii class

columns = ['ID', 'Time[s]', 'Step', 'Wall_Idx', 'R[m]', 'Phi[Rad]', 'z[m]', 'v_R[m/s]', 'v_phi[m/s]', 'v_z[m/s]']
load()[source]
plot(**kw)[source]
class omfit_classes.omfit_ionorb.OMFITionorbFull(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.omfit_ascii.OMFITascii

OMFIT class used to read the Full Orbit file output by the ionorbiter code

Parameters
  • filename – filename

  • kw – additional parameters passed to the OMFITascii class

columns = ['Time[s]', 'Step', 'R[m]', 'Phi[Rad]', 'z[m]']
load()[source]

omfit_jscope

class omfit_classes.omfit_jscope.OMFITjscope(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.omfit_ascii.OMFITascii

OMFIT class used to interface with jScope save files

Parameters
  • filename – filename passed to OMFITascii class

  • **kw – keyword dictionary passed to OMFITascii class

load()[source]
server()[source]

Figure out server to connect to

Returns

server name or machine name

treename(item, yname)[source]

Figure out treename to use

Parameters
  • item – thing to plot

  • yname – y or y_expr_1

Returns

plot(shot=None)[source]

Plot signals

Parameters

shot – shot number

Returns

dictionary with all axes indexed by a tuple indicating the row and column

signal_treename(item)[source]
class omfit_classes.omfit_jscope.OMFITdwscope(*args, **kwargs)[source]

Bases: omfit_classes.omfit_jscope.OMFITjscope

omfit_jsolver

class omfit_classes.omfit_jsolver.OMFITjsolver(*args, **kwargs)[source]

Bases: omfit_classes.omfit_nc.OMFITnc

Class used to parse the output of the jsolver equilibrium code

to_geqdsk(B0, R0, ip, resolution, shot=0, time=0, RBFkw={})[source]

maps jsolver solution to a gEQDSK file

Parameters
  • B0 – scalar vacuum B toroidal at R0

  • R0 – scalar R where B0 is defined

  • ip – toroidal current

  • resolution – g-file grid resolution

  • shot – used to set g-file string

  • time – used to set g-file string

  • RBFkw – keywords passed to internal Rbf interpolator

Returns

OMFITgeqdsk object

omfit_json

class omfit_classes.omfit_json.OMFITjson(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.omfit_ascii.OMFITascii

OMFIT class to read/write json files

OMFIT class to parse json files

Parameters
  • filename – filename of the json file

  • use_leading_comma – whether commas should be leading

  • add_top_level_brackets – whether to add opening { and closing } to string read from file

  • **kw – arguments passed to __init__ of OMFITascii
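
The use_leading_comma option concerns the writing style: some OMFIT settings files place each comma at the start of the following line. A sketch of emitting that style (illustrative only, not the actual OMFITjson serializer; integer values only, so plain string formatting yields valid JSON):

```python
# Hypothetical sketch: serialize a flat dict with leading commas. Comma
# placement is just whitespace to the JSON grammar, so the result still
# parses with the standard json module.
import json

def dumps_leading_comma(d):
    lines = ['"%s": %s' % (k, v) for k, v in d.items()]
    return '{\n  ' + '\n, '.join(lines) + '\n}'

text = dumps_leading_comma({'a': 1, 'b': 2})
print(text)
# {
#   "a": 1
# , "b": 2
# }
assert json.loads(text) == {'a': 1, 'b': 2}  # still standard JSON
```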

baseDict

alias of omfit_classes.sortedDict.SortedDict

property use_leading_comma
property add_top_level_brackets
load()[source]
save()[source]

The save method is supposed to be overridden by classes which use OMFITobject as a superclass. If left as it is this method can detect if .filename was changed and if so, makes a copy from the original .filename (saved in the .link attribute) to the new .filename

class omfit_classes.omfit_json.OMFITsettings(*args, **kwargs)[source]

Bases: omfit_classes.omfit_json.OMFITjson

OMFIT class to read/write modules settings

OMFIT class to parse json files

Parameters
  • filename – filename of the json file

  • use_leading_comma – whether commas should be leading

  • add_top_level_brackets – whether to add opening { and closing } to string read from file

  • **kw – arguments passed to __init__ of OMFITascii

baseDict

alias of omfit_classes.omfit_json.SettingsName

load()[source]

Load as json and, if that fails, load as namelist
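
The fallback pattern itself can be sketched with the stdlib alone (the real namelist parser lives in omfit_classes.namelist; the stand-in parser here is hypothetical):

```python
# Hypothetical sketch of OMFITsettings.load's try-json-then-namelist logic.
import json

def load_settings(text, namelist_parser):
    try:
        return json.loads(text)
    except ValueError:  # json.JSONDecodeError subclasses ValueError
        return namelist_parser(text)

fake_namelist = lambda text: {'raw': text}  # stand-in for the real parser
print(load_settings('{"a": 1}', fake_namelist))    # {'a': 1}
print(load_settings('&nml a=1 /', fake_namelist))  # {'raw': '&nml a=1 /'}
```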

class omfit_classes.omfit_json.SettingsName(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict

Class used for dict-types under OMFITsettings

baseDict

alias of omfit_classes.omfit_json.SettingsName

omfit_kepler

class omfit_classes.omfit_kepler.OMFITkeplerParams(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.omfit_ascii.OMFITascii

OMFIT class used to interface with kepler input files

Parameters
  • filename – filename passed to OMFITascii class

  • **kw – keyword dictionary passed to OMFITascii class

load()[source]
save()[source]

The save method is supposed to be overridden by classes which use OMFITobject as a superclass. If left as it is this method can detect if .filename was changed and if so, makes a copy from the original .filename (saved in the .link attribute) to the new .filename

omfit_latex

class omfit_classes.omfit_latex.OMFITlatex(*args, **kwargs)[source]

Bases: omfit_classes.omfit_base.OMFITtree

Class used to manage LaTeX projects in OMFIT.

For testing, please see: samples/sample_tex.tex, samples/sample_bib.bib, samples/placeholder.jpb, and samples/bib_style_d3dde.bst .

Parameters
  • main – string Filename of the main .tex file that will be compiled.

  • mainroot – string Filename without the .tex extension. Leave as None to let this be determined automatically. Only use this option if you have a confusing filename that breaks the automatic determinator, like blah.tex.tex (hint: try not to give your files confusing names).

  • local_build – bool Deploy pdflatex job locally instead of deploying to remote server specified in SETTINGS

  • export_path – string Your LaTeX project can be quickly exported to this path if it is defined. The project can be automatically exported after build if this is defined. Can be defined later in the settings namelist.

  • export_after_build – bool Automatically export after building if export_path is defined. Can be updated later in the settings namelist.

  • hide_style_files – bool Put style files in hidden __sty__ sub tree in OMFIT (they still deploy to top level folder on disk)

  • debug – bool Some data will be saved to OMFIT[‘scratch’]

  • filename – ‘directory/bla/OMFITsave.txt’ or ‘directory/bla.zip’ where the OMFITtree will be saved (if ‘’ it will be saved in the same folder of the parent OMFITtree)

  • only – list of strings used to load only some of the branches from the tree (eg. [“[‘MainSettings’]”,”[‘myModule’][‘SCRIPTS’]”]

  • modifyOriginal – by default OMFIT will save a copy and then overwrite previous save only if successful. If modifyOriginal=True and filename is not .zip, will write data directly at destination, which will be faster but comes with the risk of deleting a good save if the new save fails for some reason

  • readOnly – will place entry in OMFITsave.txt of the parent so that this OMFITtree can be loaded, but will not save the actual content of this subtree. readOnly=True is meant to be used only after this subtree is deployed where its filename says it will be. Using this feature could result in much faster project saves if the content of this tree is large.

  • quiet – Verbosity level

  • developerMode – load OMFITpython objects within the tree as modifyOriginal

  • serverPicker – take server/tunnel info from MainSettings[‘SERVER’]

  • remote – access the filename in the remote directory

  • server – if specified the file will be downsync from the server

  • tunnel – access the filename via the tunnel

  • **kw – Extra keywords are passed to the SortedDict class

prepare_settings()[source]
lookup_dirs_and_servers(overwrite=False)[source]

Looks up workdir, remotedir, tunnel, and server

Tries to link to module or top-level settings to get information.

Parameters

overwrite – bool Overwrite existing attributes. Otherwise, attributes with valid definitions will not be updated.

grab_project(main_dir=None, get_tex=False)[source]

Grabs project files during init or after build

Parameters
  • main_dir – string Directory with the files to gather

  • get_tex – bool Gather .tex files as well (only recommended during init)

export(export_path=None)[source]

This is a wrapper for .deploy() that uses the export_path in settings by default.

Parameters

export_path – string The path to which files should be exported.

load_sample()[source]

Loads sample LaTeX source files for use as a template, example, or test case

debug()[source]

Prints info about the project.

clear_aux()[source]

Deletes aux files, like .log, .aux, .blg, etc.

build_clean()[source]

Wrapper for build that calls clear_aux first and then calls build with clean keyword.

build_pdflatex()[source]

Wrapper for build that selects the pdflatex sequence.

build_bibtex()[source]

Wrapper for build that selects the bibtex sequence.

build_full()[source]

Wrapper for build that selects the full sequence.

check_main_file()[source]

Makes sure the file defined by self[‘settings’][‘main’] is present in inputs. Backup: check if self[‘settings’][‘mainroot’] might work.

Returns

bool main file is okay (self[‘settings’][‘main’] is present) or might be okay (self[‘settings’][‘mainroot’] is present).

search_for_tex_files()[source]
Returns

List of .tex files in self

build(clean_before_build=None, sequence=None)[source]
Parameters
  • clean_before_build – bool clear temporary workDir before deploying; None pulls value from settings.

  • sequence – string ‘full’, ‘pdflatex’, ‘bibtex’, ‘2pdflatex’; None pulls value from settings.

run()[source]

Shortcut for opening the output file. If the output file has not been generated yet, a build will be attempted. This lets the user easily open the output from the top level context menu for the OMFITlatex instance, which is convenient.

open_gui()[source]

omfit_mars

class omfit_classes.omfit_mars.OMFITmars(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.startup_framework.OMFITobject

Class used to interface with MARS results directory

Parameters
  • filename – directory where the MARS result files are stored

  • extra_files – Any extra files that should be loaded

  • Nchi – Poloidal grid resolution for real-space

read_RMZM(filename)[source]
read_JPLASMA(filename, JNORM=1.0)[source]

Read JPLASMA.OUT file corresponding to perturbed currents

Parameters
  • filename – name of file

  • JNORM – normalization

read_BPLASMA(filename, BNORM=1.0)[source]

Read BPLASMA.OUT file corresponding to perturbed magnetic field

Parameters
  • filename – name of file

  • BNORM – normalization

read_XPLASMA(filename, XNORM=1.0)[source]

Read XPLASMA.OUT file corresponding to perturbed plasma displacement

Parameters
  • filename – name of file

  • XNORM – normalization

read_RESULTS(filename)[source]
read_PROFEQ(filename)[source]

Read equilibrium quantities used for the MARS calculation. Returns profiles in physical units, not MARS normalization.

Parameters

filename – name of MARS output file, usually PROFEQ.OUT

read_FREQS(filename)[source]

Reads all frequencies related to drift kinetic resonances

Parameters

filename – name of MARS output file, usually FREQUENCIES.OUT

read_TIMEEVOL(filename, ncase, NORM=False)[source]

Reads time evolution output from time-stepping runs

Parameters
  • filename – name of MARS output file, usually TIMEEVOL.OUT

  • ncase – input namelist variable defining type of MARS-* run

  • NORM – toggle normalization to physical units

read_dWk_den(filename)[source]

Reads perturbed kinetic energy density

Parameters

filename – name of MARS output file, usually DWK_ENERGY_DENSITY.OUT

read_PROFNTV(filename)[source]
read_TORQUENTV(filename)[source]

Read TORQUENTV.OUT file containing NTV torque densities

read_NUSTAR(filename)[source]

Read PROFNUSTAR.OUT file containing ion and electron effective collisionality

get_RZ()[source]

convert RM and ZM into R and Z real space co-ordinates

get_SurfS(rs, saveCarMadata=False)[source]

Generate MacSurfS ASCII file containing control surface for Equivalent Surface Current workflow :param rs: radius (r/a) of control surface picked from CHEASE vacuum mesh :param saveCarMadata: flag to save MacDataS for CarMa coupling

get_UnitVec(vacFlag=False, IIgrid=None)[source]

Get unit vectors e_s and e_chi and jacobian from real space R,Z :param vacFlag: flag to calculate metric elements in all domain (plasma+vacuum) :param IIgrid: specify the radial mesh index to calculate vectors within

get_X123()[source]

Inverse Fourier transform of plasma displacement and projection along Chi and Phi

get_B123()[source]

Inverse Fourier transform of the perturbed magnetic field and calculation of Bn

get_J123()[source]

Inverse Fourier transform of the perturbed current

get_Bcyl()[source]

Get B-field components in cylindrical coordinates

get_Xcyl()[source]

Get displacement components in cylindrical coordinates

get_dWk(NFIT=3)[source]

Calculate dWk components from DWK_ENERGY_DENSITY

Parameters

NFIT – integral of energy density skips the first NFIT+1 points

get_BatSurf(rsurf, kdR=False)[source]

Calculate normal and tangential components of BPLASMA on given surface (e.g. wall) Calculate Fourier decomposition of Bn,t,phi Method used in CarMa Forward Coupling :param rsurf: normalized radius of surface :param kdR: flag for alternative calculation of unit vectors (default=False)

load()[source]
plot_RMZM(fig=None)[source]

Plot RM-ZM grid :param fig: specify target figure

plot_RZ(fig=None)[source]

Plot R-Z grid :param fig: specify target figure

plot_SurfS(fig=None)[source]

Plot surface stored in [‘CouplingSurfS’] :param fig: specify target figure

plot_JPLASMA(Mmax=5, fig=None)[source]

Plot perturbed current components j1,j2,j3 The MARS variable “J*j” is shown, not the physical quantity “j” :param fig: specify target figure

plot_FREQUENCIES(fig=None)[source]

Plot frequencies related to drift kinetic resonances :param fig: specify target figure

plot_BPLASMA(Mmax=5, fig=None)[source]

Plot perturbed field components b1,b2,b3 The MARS variable “J*b” is shown, not the physical quantity “b” :param Mmax: upper poloidal harmonic for labeling :param fig: specify target figure

plot_XPLASMA(Mmax=5, fig=None)[source]

Plot plasma displacement components X1,X2,X3 :param Mmax: upper poloidal harmonic for labeling :param fig: specify target figure

plot_Xn(fig=None)[source]

Plot normal plasma displacement :param fig: specify target figure

plot_Bn(II=False, fig=None)[source]

Plot normal field perturbation :param II: plot up to II radial grid index (default is plasma boundary) :param fig: specify target figure

plot_BrzSurf(II=False, rsurf=False, fig=None)[source]

Plot Br Bz (cyl. coords.) along specified surface :param II: plot on II radial grid index (default is plasma boundary) :param rsurf: normalized radius of surface :param fig: specify target figure

plot_BrzAbs(fig=None)[source]

Plot |B| along poloidal angle at plasma surface :param fig: specify target figure

plot_BrzMap(fig=None)[source]

Plot |B| (R,Z) map inside plasma :param fig: specify target figure

plot_XrzMap(fig=None)[source]

Plot |X| (R,Z) map inside plasma :param fig: specify target figure

plot_dWkDen(fig=None)[source]

Plot total dWk energy density profiles :param fig: specify target figure

plot_dWkComp(fig=None)[source]

Plot dWk components (integral of energy density) :param fig: specify target figure

plot_BntpSurf(fig=None)[source]

Plot Bn, Bt, Bphi on surface calculated by get_BatSurf :param fig: specify target figure

plot_BntpSurfMm(fig=None)[source]

Plot poloidal Fourier harmonics of Bn, Bt, Bphi on surface calculated by get_BatSurf :param fig: specify target figure

plot_BatSurf3D(fig=None, rsurf=1.0, ntor=- 1)[source]

Plot Bn on surface calculated by get_BatSurf, normalized and reconstructed in (chi,phi) real space :param fig: specify target figure :param rsurf: normalized surface radius for plotting, default is plasma boundary :param ntor: toroidal mode number for inverse Fourier transform in toroidal angle

plot_Bwarp(fig=None, rsurf=1.0)[source]

Visualize B-field perturbation on specific surface :param fig: specify target figure :param rsurf: normalized surface radius for plotting, default is plasma boundary

plot_Xwarp(fig=None, rsurf=1.0)[source]

Visualize displacement perturbation on specific surface :param fig: specify target figure :param rsurf: normalized surface radius for plotting, default is plasma boundary

plot_PROFNTV(fig=None)[source]

Plot “boundary”-frequencies between different NTV regimes :param fig: specify target figure

plot_NUSTAR(fig=None, kplot=2)[source]

Plot effective collisionality for ions and electrons :param fig: specify target figure :param kplot: =2 to use logscale, =1 to use linear y-axis

class omfit_classes.omfit_mars.OMFITnColumns(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.omfit_ascii.OMFITascii

OMFIT class used to interface with n-columns input files

Parameters
  • filename – filename passed to OMFITascii class

  • **kw – keyword dictionary passed to OMFITascii class
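
A standalone sketch of parsing the kind of whitespace-separated n-column ASCII data that OMFITnColumns handles (the parsing helper and sample data are illustrative, not the class internals):

```python
# Hypothetical sketch: read numeric rows, then transpose into per-column lists.
def read_columns(text):
    rows = [list(map(float, line.split()))
            for line in text.splitlines() if line.strip()]
    return list(map(list, zip(*rows)))  # transpose rows -> columns

cols = read_columns("0.0 1.0\n0.5 0.8\n1.0 0.2")
print(cols)  # [[0.0, 0.5, 1.0], [1.0, 0.8, 0.2]]
```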

load()[source]
save()[source]

The save method is supposed to be overridden by classes which use OMFITobject as a superclass. If left as it is this method can detect if .filename was changed and if so, makes a copy from the original .filename (saved in the .link attribute) to the new .filename

plot()[source]
class omfit_classes.omfit_mars.OMFITmarsProfile(*args, **kwargs)[source]

Bases: omfit_classes.omfit_mars.OMFITnColumns

omfit_matlab

class omfit_classes.omfit_matlab.OMFITmatlab(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.startup_framework.OMFITobject

OMFIT class used to interface with MATLAB .mat files up to version 7.2

This class makes use of the scipy.io.loadmat/savemat interface

Parameters
  • filename – filename passed to OMFITobject class

  • **kw – keyword dictionary passed to OMFITobject class

load()[source]

Method used to load the content of the file specified in the .filename attribute

save()[source]

Method used to save the content of the object to the file specified in the .filename attribute

Returns

None

omfit_matrix

class omfit_classes.omfit_matrix.OMFITmatrix(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.omfit_path.OMFITpath

OMFITmatrix leverages both xarray and pandas as an efficient way of storing matrices to file. Internally, the data is stored as an xarray.DataArray under self[‘data’].

Parameters
  • filename – path to file.

  • bin – default None: filetype is unknown; if True, NetCDF; if False, ASCII.

  • zip – default None: compression is unknown; if False, switched off; if True, on.

  • **kw – keywords for OMFITpath.__init__()
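
With bin=None, OMFITmatrix decides the backend at load time: try the binary (xarray/NetCDF) reader first, then fall back to the text (pandas CSV) reader. A backend-agnostic sketch of that dispatch, using stand-in reader callables rather than the real xarray/pandas calls:

```python
# Hypothetical sketch of the bin=None dispatch: binary reader first,
# text reader on failure; explicit bin=True/False forces one backend.
def load_matrix(path, bin=None, binary_reader=None, text_reader=None):
    if bin is True:
        return binary_reader(path), True
    if bin is False:
        return text_reader(path), False
    try:  # bin is None: attempt binary, fall back to text
        return binary_reader(path), True
    except Exception:
        return text_reader(path), False

def fake_netcdf_reader(path):
    raise IOError('not a NetCDF file')  # stand-in for xarray failure

def fake_csv_reader(path):
    return [[1, 2], [3, 4]]  # stand-in for pandas result

data, is_bin = load_matrix('m.csv', binary_reader=fake_netcdf_reader,
                           text_reader=fake_csv_reader)
print(data, is_bin)  # [[1, 2], [3, 4]] False
```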

load(bin=None, zip=None, xrkw={}, **pdkw)[source]

https://xarray.pydata.org/en/stable/generated/xarray.open_dataarray.html https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html

Parameters
  • bin – default None: load through xarray first, then through pandas; if True, xarray only; if False, pandas only.

  • zip – default False: compression is switched off; if True, on.

  • xrkw – keywords for xarray.open_dataarray()

  • **pdkw – keywords for pandas.read_csv()

Return bin, zip

resulting values for binary, zipped.

save(bin=None, zip=None, xrkw={}, **pdkw)[source]

https://xarray.pydata.org/en/stable/generated/xarray.DataArray.to_netcdf.html https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_csv.html

Parameters
  • bin – default None: save through xarray first, then pandas; if True, xarray only; if False, pandas only.

  • zip – default False: compression is switched off; if True, on.

  • xrkw – keywords for xarray.to_netcdf()

  • **pdkw – keywords for pandas.to_csv()

Return bin, zip

resulting values for binary, zipped.

close(**kw)[source]
Parameters

**kw – keywords for self.save()

omfit_mds

class omfit_classes.omfit_mds.OMFITmdsValue(server, treename=None, shot=None, TDI=None, quiet=False, caching=True, timeout=- 1)[source]

Bases: dict, omfit_classes.omfit_mds.OMFITmdsObjects

This class provides access to MDS+ values and allows execution of any TDI command. The MDS+ value data, dim_of, units, and error can be accessed via the methods defined in this class. This class is capable of ssh-tunneling if necessary to reach the MDSplus server. Tunneling is set based on OMFIT['MainSettings']['SERVER']

Parameters
  • server – MDS+ server or Tokamak device (e.g. atlas.gat.com or DIII-D)

  • treename – MDS+ tree (None or string)

  • shot – MDS+ shot (integer or string)

  • TDI – TDI command to be executed

  • quiet – print if no data is found

  • caching – if False turns off caching system, else behaviour is set by OMFITmdsCache

  • timeout – int Timeout setting passed to MDSplus.Connection.get(), in ms. -1 seems to disable timeout. Only works for newer MDSplus versions, such as 7.84.8.

>> tmp = OMFITmdsValue(server='DIII-D', treename='ELECTRONS', shot=145419, TDI='\ELECTRONS::TOP.PROFILE_FITS.ZIPFIT.EDENSFIT')
>> x = tmp.dim_of(0)
>> y = tmp.data()
>> pyplot.plot(x, y.T)

To resample data on the server side, which can be much faster when dealing with large data and slow connections, provide t_start, t_end, and dt in the system’s time units (ms or s). For example, to get stored energy from EFIT01 from 0 to 5000 ms at 50 ms intervals (DIII-D uses time units of ms; some other facilities like KSTAR use seconds):

>> wmhd = OMFITmdsValue(server='DIII-D', shot=154749, treename='EFIT01', TDI=r'resample(WMHD, 0, 5000, 50)')

The default resample function uses linear interpolation. To override it and use a different method, see http://www.mdsplus.org/index.php/Documentation:Users:LongPulseExtensions#Overriding_Re-sampling_Procedures
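
Building the server-side resample expression is plain string manipulation; wrapping any TDI signal this way applies the (by default linear) resampling on the server before data is transferred. A small hedged helper, matching the WMHD example above (the helper name is illustrative):

```python
# Hypothetical helper: wrap a TDI signal in MDSplus' resample() so the
# server interpolates onto a regular time base before sending the data.
def resample_tdi(signal, t_start, t_end, dt):
    return 'resample(%s, %s, %s, %s)' % (signal, t_start, t_end, dt)

print(resample_tdi('WMHD', 0, 5000, 50))  # resample(WMHD, 0, 5000, 50)
```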

To access DIII-D PTDATA, set treename='PTDATA' and use the TDI keyword to pass the signal to be retrieved. Note: the PTDATA ical option can be passed by setting the shot keyword as a string with the shot number followed by the ical option, separated by a comma, e.g. shot='145419,1'

>> tmp = OMFITmdsValue('DIII-D', treename='PTDATA', shot=145419, TDI='ip')
>> x = tmp.dim_of(0)
>> y = tmp.data()
>> pyplot.plot(x, y)

dim_of(index)[source]
Parameters

index – index of the dimension

Returns

array with the dimension of the MDS+ data for its dimension index

units()[source]
Returns

string of the units of the MDS+ data

units_dim_of(index)[source]
Parameters

index – : index of the dimension

Returns

string of the units of the MDS+ dimension index

error_dim_of(index)[source]
Parameters

index – index of the dimension

Returns

array with the dimension of the MDS+ error for its dimension index

xarray()[source]
Returns

DataArray with information from this node

property MDSconn
write(xarray_data)[source]

write data to node

Parameters

xarray_data – multidimensional data to be written to MDS+ node in the format of an xarray.DataArray

Returns

outcome of MDSplus.put command

fetch()[source]

Connect to MDS+ server and fetch ALL data: data, units, dim_of, units_dim_of, error, error_dim_of

Returns

None

data(cut_value=None, cut_dim=0, interp='nearest', bounds_error=True)[source]
Parameters
  • cut_value – value of dimension cut_dim along which to perform the cut

  • cut_dim – dimension along which to perform the cut

  • interp – interpolation of the cut: nearest, linear, cubic

  • bounds_error – whether an error is raised if cut_value exceeds the range of dim_of(cut_dim)

Returns

If cut_value is None then return the MDS data. If cut_value is not None, then return data value at cut_value for dimension cut_dim
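As a stdlib-only sketch (not the actual implementation), the cut logic described above amounts to locating cut_value along dim_of(cut_dim) and interpolating:

```python
import bisect

def cut_at(x, y, cut_value, interp='nearest', bounds_error=True):
    """Return y evaluated at cut_value along the x axis.

    Illustrative sketch of the cut behavior; the real method operates
    on the MDS+ data of this node. x must be sorted ascending.
    """
    if bounds_error and not (x[0] <= cut_value <= x[-1]):
        raise ValueError(f"cut_value {cut_value} outside [{x[0]}, {x[-1]}]")
    # find the pair of samples bracketing cut_value
    i = bisect.bisect_left(x, cut_value)
    i = min(max(i, 1), len(x) - 1)
    if interp == 'nearest':
        return y[i] if abs(x[i] - cut_value) < abs(x[i - 1] - cut_value) else y[i - 1]
    # linear interpolation between the bracketing samples
    f = (cut_value - x[i - 1]) / (x[i] - x[i - 1])
    return y[i - 1] + f * (y[i] - y[i - 1])

print(cut_at([0, 1, 2], [10, 20, 30], 1.4))            # 20
print(cut_at([0, 1, 2], [10, 20, 30], 1.5, 'linear'))  # 25.0
```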

error(cut_value=None, cut_dim=0, interp='nearest', bounds_error=True)[source]
Parameters
  • cut_value – value of dimension cut_dim along which to perform the cut

  • cut_dim – dimension along which to perform the cut

  • interp – interpolation of the cut: nearest, linear, cubic

  • bounds_error – whether an error is raised if cut_value exceeds the range of dim_of(cut_dim)

Returns

If cut_value is None then return the MDS error. If cut_value is not None, then return error value at cut_value for dimension cut_dim

property help
check(check_dim_of=0, debug=False)[source]

Checks whether a valid result has been obtained or not (could be missing data, bad pointname, etc.)

Parameters
  • check_dim_of – int if check_dim_of >= 0: check that .dim_of(check_dim_of) is valid else: disable additional check

  • debug – bool Return a dictionary containing all the flags and relevant information instead of a single bool

Returns

bool or dict False if a data problem is detected, otherwise True. If debug is true: dict containing debugging information relating to the final flag

plot(*args, **kw)[source]

plot method based on xarray

Returns

current axes

class omfit_classes.omfit_mds.OMFITmds(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.omfit_mds.OMFITmdsObjects

This class returns the structure of an MDS+ tree. Objects within this tree are OMFITmdsValue objects

Parameters
  • server – string MDS+ server or Tokamak device (e.g. atlas.gat.com or DIII-D)

  • treename – string MDS+ tree

  • shot – int MDS+ shot

  • subtree – string subtree

  • caching – bool OMFITmdsValue use caching system or not

  • quiet – bool print when retrieving data from MDS+

load()[source]

Load the MDS+ tree structure

class omfit_classes.omfit_mds.OMFITmdsObjects(server, treename, shot, quiet=True)[source]

Bases: object

idConn()[source]
idConnStr()[source]
interpret_id()[source]
class omfit_classes.omfit_mds.OMFITmdsConnection(server, *args, **kw)[source]

Bases: object

This class provides a convenient interface to the OMFITmdsConnectionBaseClass. Specifically, it allows specifying a series of commands (mdsvalue, write, write_dataset, tcl, test_connection) without having to re-type the server for each command.

mdsvalue(treename, shot, what, timeout=None)[source]
write(treename, shot, node, data, error=None, units=None)[source]
write_dataset(treename, shot, subtree, xarray_dataset, quantities=None, translator=None)[source]
create_model_tree(treename, subtree, data, clear_subtree=False)[source]
tcl(tcl)[source]
test_connection()[source]
omfit_classes.omfit_mds.MDSplusConnection(server, cached=True, quiet=True)[source]

Return a low-level MDSplus.Connection object to the specified MDS+ server, taking care of establishing a tunneled connection if necessary

Parameters
  • server – MDS+ server to connect to (also interprets machine names)

  • cached – whether to return an existing connection to the MDS+ server if one already exists

  • quiet – verbose or not

Returns

MDSplus.Connection object

omfit_classes.omfit_mds.translate_MDSserver(server, treename)[source]

This function maps mnemonic names to real MDS+ servers

Parameters
  • server – mnemonic MDS+ name (eg. device name, like DIII-D)

  • treename – this is to allow selection of a server depending on the treename that is accessed

omfit_classes.omfit_mds.tunneled_MDSserver(MDSserver, quiet=True, forceRemote=False)[source]
omfit_classes.omfit_mds.solveMDSserver(server)[source]
omfit_classes.omfit_mds.OMFITmdsCache(cachesDir=[], limit=[])[source]

Utility function to manage OMFIT MDS+ cache

Parameters
  • cachesDir – directory to use for off-line storage of data If True it defaults to OMFIT[‘MainSettings’][‘SETUP’][‘cachesDir’] If False, then off-line MDS+ caching is disabled If a string, then that value is used

  • limit – limit number of elements for in-memory caching

NOTE: off-line caching can be achieved via:

>> # off-line caching controlled by OMFIT['MainSettings']['SETUP']['cachesDir']
>> OMFIT['MainSettings']['SETUP']['cachesDir'] = '/path/to/where/MDS/cache/data/resides'
>> OMFITmdsCache(cachesDir=True)
>>
>> # off-line caching for this OMFIT session to specific folder
>> OMFITmdsCache(cachesDir='/path/to/where/MDS/cache/data/resides')
>>
>> # purge off-line caching (clears directory based on cachesDir)
>> OMFITmdsCache().purge()
>>
>> # disable off-line caching for this OMFIT session
>> OMFITmdsCache(cachesDir=False)
>>
>> # disable default off-line caching
>> OMFIT['MainSettings']['SETUP']['cachesDir'] = False

omfit_classes.omfit_mds.interpret_signal(server=None, shot=None, treename=None, tdi=None, scratch=None)[source]

Interprets a signal like abc * def by making multiple MDSplus calls.

MDSplus might be able to interpret expressions like abc*def by itself, but sometimes this doesn't work (for example, when the data actually reside in PTDATA).

You might also have already cached or want to cache abc and def locally because you need them for other purposes.

Parameters
  • server – string The name of the MDSplus server to connect to

  • shot – int Shot number to query

  • treename – string or None Name of the MDSplus tree. None is for connecting to PTDATA at DIII-D.

  • tdi – string A pointname or expression containing pointnames Use ‘testytest’ as a fake pointname to avoid connecting to MDSplus during testing

  • scratch – dict-like [optional] Catch intermediate quantities for debugging by providing a dict

Returns

(array, array, string/None, string/None) x: independent variable y: dependent variable as a function of x units: units of y or None if not found xunits: units of x or None if not found

omfit_classes.omfit_mds.available_efits_from_mds(scratch_area, device, shot, list_empty_efits=False, default_snap_list=None, format='{tree}')[source]

Attempts to lookup a list of available EFITs from MDSplus

Works for devices that store EFITs together in a group under a parent tree, such as:

EFIT (parent tree)
|- EFIT01 (results from an EFIT run)
|- EFIT02 (results from another run)
|- EFIT03
|- ...

If the device’s MDSplus tree is not arranged like this, it will fail and return [].

Requires a single MDSplus call

Parameters
  • scratch_area – dict Scratch area for storing results to reduce repeat calls. Mainly included to match the call signature of available_efits_from_rdb(), since OMFITmdsValue already has caching.

  • device – str Device name

  • shot – int Shot number

  • list_empty_efits – bool List all EFITs, including those without any data

  • default_snap_list – dict [optional] Default set of EFIT treenames. Newly discovered ones will be added to the list.

  • format – str Instructions for formatting data to make the EFIT tag name. Provided for compatibility with available_efits_from_rdb(); currently the only supported option is ‘{tree}’.

Returns

(dict, str) Dictionary keys will be descriptions of the EFITs

Dictionary values will be the formatted identifiers. For now, the only supported format is just the treename. If lookup fails, the dictionary will be {‘’: ‘’} or will only contain default results, if any.

String will contain information about the discovered EFITs

omfit_classes.omfit_mds.read_vms_date(vms_timestamp)[source]

Interprets a VMS date

VMS dates are ticks since 1858-11-17 00:00:00, where each tick is 100 ns. Unix ticks are seconds since 1970-01-01 00:00:00 GMT (i.e. 1969-12-31 16:00:00 US Pacific time). This function may be useful because MDSplus dates, at least for KSTAR, are recorded as VMS timestamps, which are not understood by datetime.

Parameters

vms_timestamp – unsigned int

Returns

datetime.datetime
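Assuming only the epoch and tick size stated above, the conversion can be sketched with the standard library (read_vms_date_sketch is illustrative, not the OMFIT function):

```python
import datetime

# VMS epoch: 1858-11-17 00:00:00 (the Modified Julian Date origin);
# one tick is 100 ns, i.e. 0.1 microseconds
VMS_EPOCH = datetime.datetime(1858, 11, 17)

def read_vms_date_sketch(vms_timestamp):
    """Stdlib-only sketch of the VMS-to-datetime conversion."""
    return VMS_EPOCH + datetime.timedelta(microseconds=vms_timestamp / 10)

# 40587 days separate the VMS/MJD epoch from the Unix epoch
unix_epoch_ticks = 40587 * 86400 * 10**7
print(read_vms_date_sketch(unix_epoch_ticks))  # 1970-01-01 00:00:00
```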

omfit_mmm

class omfit_classes.omfit_mmm.OMFITmmm(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.omfit_ascii.OMFITascii

OMFIT class used to load from Multi Mode Model output files

Parameters
  • filename – filename passed to OMFITascii class

  • **kw – keyword dictionary passed to OMFITascii class

load()[source]

omfit_namelist

class omfit_classes.omfit_namelist.OMFITnamelist(*args, **kwargs)[source]

Bases: omfit_classes.namelist.NamelistFile, omfit_classes.omfit_ascii.OMFITascii

OMFIT class used to interface with FORTRAN namelist files

Parameters
  • filename – filename to be parsed

  • input_string – input string to be parsed (takes precedence over filename)

  • nospaceIsComment – whether a line which starts without a space should be retained as a comment. If None, a “smart” guess is attempted

  • outsideOfNamelistIsComment – whether the content outside of the namelist blocks should be retained as comments. If None, a “smart” guess is attempted

  • retain_comments – whether comments should be retained or discarded

  • skip_to_symbol – string to jump to for the parsing. Content before this string is ignored

  • collect_arrays – whether arrays defined throughout the namelist should be collected into single entries (e.g. a=5,a(1,4)=0)

  • multiDepth – whether nested namelists are allowed

  • bang_comment_symbol – string containing the characters that should be interpreted as comment delimiters.

  • equals – how the equal sign should be written when saving the namelist

  • compress_arrays – compress repeated elements in an array by using the n*value namelist repeat syntax

  • max_array_chars – wrap long array lines

  • explicit_arrays – (True,False,1) whether to place name(1) in front of arrays. If 1 then (1) is only placed in front of arrays that have only one value.

  • separator_arrays – characters to use between array elements

  • split_arrays – write each array element explicitly on a separate line Specifically this functionality was introduced to split TRANSP arrays

  • idlInput – whether to interpret the namelist as IDL code
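
As an illustration of the repeat-count compression that compress_arrays enables on save (a sketch, not the actual implementation; compress_array is a hypothetical helper):

```python
from itertools import groupby

def compress_array(values):
    """Compress repeated elements using the Fortran namelist
    repeat-count syntax (count*value), e.g. [0, 0, 0, 1] -> '3*0, 1'."""
    tokens = []
    for value, run in groupby(values):
        n = len(list(run))
        tokens.append(f"{n}*{value}" if n > 1 else f"{value}")
    return ', '.join(tokens)

print(compress_array([0.0, 0.0, 0.0, 1.5, 2.0, 2.0]))  # 3*0.0, 1.5, 2*2.0
```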

load()[source]

Method used to load the content of the file specified in the .filename attribute

Returns

None

property equals
save()[source]

Method used to save the content of the object to the file specified in the .filename attribute

Returns

None

class omfit_classes.omfit_namelist.OMFITfortranNamelist(*args, **kwargs)[source]

Bases: omfit_classes.omfit_namelist.OMFITnamelist

OMFIT class used to interface with FORTRAN namelist files with arrays indexed according to FORTRAN indexing convention

Parameters
  • filename – filename to be parsed

  • input_string – input string to be parsed (takes precedence over filename)

  • nospaceIsComment – whether a line which starts without a space should be retained as a comment. If None, a “smart” guess is attempted

  • outsideOfNamelistIsComment – whether the content outside of the namelist blocks should be retained as comments. If None, a “smart” guess is attempted

  • retain_comments – whether comments should be retained or discarded

  • skip_to_symbol – string to jump to for the parsing. Content before this string is ignored

  • collect_arrays – whether arrays defined throughout the namelist should be collected into single entries (e.g. a=5,a(1,4)=0)

  • multiDepth – whether nested namelists are allowed

  • bang_comment_symbol – string containing the characters that should be interpreted as comment delimiters.

  • equals – how the equal sign should be written when saving the namelist

  • compress_arrays – compress repeated elements in an array by using the n*value namelist repeat syntax

  • max_array_chars – wrap long array lines

  • explicit_arrays – (True,False,1) whether to place name(1) in front of arrays. If 1 then (1) is only placed in front of arrays that have only one value.

  • separator_arrays – characters to use between array elements

  • split_arrays – write each array element explicitly on a separate line Specifically this functionality was introduced to split TRANSP arrays

  • idlInput – whether to interpret the namelist as IDL code

class omfit_classes.omfit_namelist.fortran_environment(nml)[source]

Bases: object

Environment class used to allow FORTRAN index convention for sparray objects in a namelist

class omfit_classes.omfit_namelist.sparray(shape=None, default=nan, dtype=None, wrap_dim=0, offset=0, index_offset=False)[source]

Bases: object

Class for n-dimensional sparse array objects using Python’s dictionary structure, based upon http://www.janeriksolem.net/2010/02/sparray-sparse-n-dimensional-arrays-in.html

dense()[source]

Convert to dense NumPy array

sum()[source]

Sum of elements

fortran(index, value)[source]
isnan(x)[source]
fortran_repr()[source]
lt(other)[source]
le(other)[source]
eq(other)[source]
ge(other)[source]
gt(other)[source]
is_(other)[source]
is_not(other)[source]
add(other)[source]
and_(other)[source]
floordiv(other)[source]
index(other)[source]
lshift(other)[source]
mod(other)[source]
mul(other)[source]
matmul(other)[source]
or_(other)[source]
pow(other)[source]
rshift(other)[source]
sub(other)[source]
truediv(other)[source]
xor(other)[source]
bool()[source]
real()[source]
imag()[source]
not_()[source]
truth()[source]
abs()[source]
inv()[source]
invert()[source]
neg()[source]
pos()[source]
copy()[source]
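The core idea of the dictionary-backed sparse array can be sketched as follows (a minimal 2D illustration, not the actual sparray implementation):

```python
class SparseArray:
    """Minimal sketch of a dict-backed n-dimensional sparse array:
    only non-default entries are stored, keyed by their index tuple."""

    def __init__(self, shape, default=0.0):
        self.shape = shape
        self.default = default
        self._data = {}

    def __setitem__(self, index, value):
        self._data[index] = value

    def __getitem__(self, index):
        # unset entries fall back to the default value
        return self._data.get(index, self.default)

    def dense(self):
        """Expand to a nested-list dense representation (2D case)."""
        rows, cols = self.shape
        return [[self[(i, j)] for j in range(cols)] for i in range(rows)]

a = SparseArray((2, 3))
a[(1, 2)] = 5.0
print(a.dense())  # [[0.0, 0.0, 0.0], [0.0, 0.0, 5.0]]
```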

omfit_nc

class omfit_classes.omfit_nc.OMFITnc(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.startup_framework.OMFITobject

OMFIT class used to interface with NETCDF files. This class is based on the netCDF4 library, which supports the following file formats: ‘NETCDF4’, ‘NETCDF4_CLASSIC’, ‘NETCDF3_CLASSIC’, ‘NETCDF3_64BIT’. NOTE: This class contains OMFITncData class objects.

Parameters
  • filename – filename passed to OMFITobject class

  • **kw – keyword dictionary passed to OMFITobject class and the netCDF4.Dataset() method at loading

load(**kw)[source]

Method used to load the content of the file specified in the .filename attribute

Parameters

**kw – keyword dictionary which is passed to the netCDF4.Dataset function

Returns

None

save(zlib=False, complevel=4, shuffle=True, quiet=False, force=False)[source]

Method used to save the content of the object to the file specified in the .filename attribute

The zlib, complevel, and shuffle keywords are passed on to the createVariable method.

Parameters
  • zlib – default False, switch on netCDF compression.

  • complevel – default 4, compression level (1=fastest, 9=slowest).

  • shuffle – default True, shuffle data.

  • quiet – default False, suppress output to the console.

  • force – default False, force save from scratch/as new file.

Returns

None

class omfit_classes.omfit_nc.OMFITncData(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict

Class which takes care of converting netCDF variables into SortedDict to be used in OMFIT. OMFITncData objects are intended to be contained in OMFITnc objects.

Parameters
  • variable

    • if None then returns

    • if a netCDF4.Variable object then data is read from that variable

    • if anything else, this is used as the data

    • if a string and filename is set then this is the name of the variable to read from the file

  • dimension

    • Not used if variable is a netCDF4.Variable object or None

    • If None, then the dimension of the variable is automatically set

    • If not None sets the dimension of the variable

    • Ignored if filename is set

  • dtype

    • Not used if variable is a netCDF4.Variable object or None

    • If None, then the data type of the variable is automatically set

    • If not None sets the data type of the variable

    • Ignored if filename is set

  • filename

    • this is the filename from which variables will be read

    • if None then no dynamic loading

load(variable=None, dimension=None, dtype=None)[source]
class omfit_classes.omfit_nc.OMFITncDataTmp(*args, **kwargs)[source]

Bases: omfit_classes.omfit_nc.OMFITncData

Same class as OMFITncData but this type of object will not be saved into the NetCDF file. This is useful if one wants to create “shadow” NetCDF variables into OMFIT without altering the original NetCDF file.

omfit_nimrod

class omfit_classes.omfit_nimrod.OMFITnimrod(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.startup_framework.OMFITobject

OMFIT class used to interface with NIMROD output files Note: does not support writing

Parameters
  • filename – filename passed to OMFITobject class

  • **kw – keyword dictionary passed to OMFITobject class

contour_labels = ['B0_r [T]', 'B0_z [T]', 'B0_{\\phi} [T]', 'J0_r [A/m^2]', 'J0_z [A/m^2]', 'J0_{\\phi} [A/m^2]', 'V0_r [m/s]', 'V0_z [m/s]', 'V0_{\\phi} [m/s]', 'P_0 [Pa]', 'P_{0e} [Pa]', 'n0 [m^{-3}]', 'diff shape', 'B_r [T]', 'B_z [T]', 'B_{\\phi} [T]', 'Im B_r [T]', 'Im B_z [T]', 'Im B_{\\phi} [T]', 'J_r [A/m^2]', 'J_z [A/m^2]', 'J_{\\phi} [A/m^2]', 'Im J_r [A/m^2]', 'Im J_z [A/m^2]', 'Im J_{\\phi} [A/m^2]', 'V_r [m/s]', 'V_z [m/s]', 'V_{\\phi} [m/s]', 'Im V_r [m/s]', 'Im V_z [m/s]', 'Im V_{\\phi} [m/s]', 'P [Pa]', 'Im P [Pa]', 'P_e [Pa]', 'Im P_e [Pa]', 'n [m^{-3}]', 'Im n [m^{-3}]', 'conc', 'Im conc', 'T_e [eV]', 'Im T_e [eV]', 'T_i [eV]', 'Im T_i [eV]', 'P_{dilu} [W/m^3]', 'Im P_{dilu} [W/m^3]', 'P_{brem} [W/m^3]', 'Im P_{brem} [W/m^3]', 'P_{rec} [W/m^3]', 'Im P_{rec} [W/m^3]', 'P_{line} [W/m^3]', 'Im P_{line} [W/m^3]', 'P_{ion} [W/m^3]', 'Im P_{ion} [W/m^3]', 'n_imp [m^{-3}]', 'Im n_imp [m^{-3}]', 'n_{ele} [m^{-3}]', 'Im n_{ele} [m^{-3}]', 'n_{neut} [m^{-3}]', 'Im n_{neut} [m^{-3}]', 'n_{imp neut} [m^{-3}]', 'n_{imp +1} [m^{-3}]', 'n_{imp +2} [m^{-3}]', 'n_{imp +3} [m^{-3}]', 'n_{imp +4} [m^{-3}]', 'n_{imp +5} [m^{-3}]', 'n_{imp +6} [m^{-3}]', 'n_{imp +7} [m^{-3}]', 'n_{imp +8} [m^{-3}]', 'n_{imp +9} [m^{-3}]', 'n_{imp +10} [m^{-3}]', 'n_{imp +11} [m^{-3}]', 'n_{imp +12} [m^{-3}]', 'n_{imp +13} [m^{-3}]', 'n_{imp +14} [m^{-3}]', 'n_{imp +15} [m^{-3}]', 'n_{imp +16} [m^{-3}]', 'n_{imp +17} [m^{-3}]', 'n_{imp +18} [m^{-3}]']
discharge_labels = ['istep', 'time', 'divB', 'totE', 'totIE', 'totIEe', 'totIEi', 'lnE', 'lnIE', 'grate', 'Itot', 'Ipert', 'Vloop', 'totflux', 'n0flux', 'bigF', 'Theta', 'magCFL', 'NLCFL', 'flowCFL']
polflux_labels = ['R', 'Z', 'polflux', 'R*Bphi']
flsurf_labels = ['t', 'ix', 'polm', 'sqrt(polm)', 'polp', 'sqrt(polp)', 'q_surf', 'par_surf']
SPIhist_labels = ['thstp', 'thtm', 'ip', 'Rp', 'Zp', 'Pp', 'radius', 'aden', 'taden', 'eden', 'eT', 'ablt', 'tablt', 'asim', 'tsim']
energy_labels = ['step', 'time', 'imode', 'k', 'E_mag', 'E_kin', 'prad']
kpraden_labels = ['istep', 'time', 'qlosd', 'qloso', 'qlosb', 'qlosr', 'qlosl', 'qlosi', 'Nz', 'Ne', 'Nz+zi', 'qlost', 'elosd', 'eloso', 'elosb', 'elosr', 'elosl', 'elosi', 'elost']
kpradnz_labels = ['istep', 'time', 'Nz+2', 'Nz+3', 'Nz+4', 'Nz+5', 'Nz+6', 'Nz+7', 'Nz+8', 'Nz+9', 'Nz+10']
xy_slice_labels = ['ix/mx', 'iy/my', 'R', 'Z', 'B0R', 'B0Z', 'B0Phi', 'J0R', 'J0Z', 'J0Phi', 'V0R', 'V0Z', 'V0Phi', 'P0', 'PE0', 'n0', 'diff shape', 'BR', 'BZ', 'BP', 'Im BR', 'Im BZ', 'Im BP', 'JR', 'JZ', 'JP', 'Im JR', 'Im JZ', 'Im JP', 'VR', 'VZ', 'VP', 'Im VR', 'Im VZ', 'Im VP', 'P', 'Im P', 'PE', 'Im PE', 'n', 'Im n', 'conc', 'Im conc', 'Te', 'Im Te', 'Ti', 'Im Ti']
xt_slice_labels = ['ix/mx', 'istep', 't', 'R', 'Z', 'B0R', 'B0Z', 'B0Phi', 'J0R', 'J0Z', 'J0Phi', 'V0R', 'V0Z', 'V0Phi', 'P0', 'PE0', 'n0', 'diff shape', 'BR', 'BZ', 'BP', 'Im BR', 'Im BZ', 'Im BP', 'JR', 'JZ', 'JP', 'Im JR', 'Im JZ', 'Im JP', 'VR', 'VZ', 'VP', 'Im VR', 'Im VZ', 'Im VP', 'P', 'Im P', 'PE', 'Im PE', 'n', 'Im n', 'conc', 'Im conc', 'Te', 'Im Te', 'Ti', 'Im Ti']
xt_slice_kprad_labels = ['nimp', 'Im nimp', 'ne', 'Im ne', 'qlosd', 'Im qlosd', 'qloso', 'Im qloso', 'qlosb', 'Im qlosb', 'qlosr', 'Im qlosr', 'qlosl', 'Im qlosl', 'qlosi', 'Im qlosi', 'ndn', 'Im ndn']
nimhist_labels = ['istep', 't', 'imode', 'k', 'Br', 'Bz', 'Bphi', 'Im Br', 'Im Bz', 'Im Bphi', 'Jr', 'Jz', 'Jphi', 'Im Jr', 'Im Jz', 'Im Jphi', 'Vr', 'Vz', 'Vphi', 'Im Vr', 'Im Vz', 'Im Vphi', 'P', 'Im P', 'Pe', 'Im Pe', 'n', 'Im n', 'conc', 'Im conc', 'Te', 'Im Te', 'Ti', 'Im Ti']
load()[source]
plot(slice=0, phase=0, nlevels=21, linewidth=1, linestyle='-', color=None, alpha=1)[source]

Default plot of energy.bin, discharge.bin, contour.bin and _.bin files

Parameters
  • slice – (contour.bin) slice to plot

  • phase – (contour.bin) plot real part of complex quantities multiplied by np.exp(1j*phase)

  • nlevels – (contour.bin) number of levels in the contours

iz = 18

omfit_oedge

class omfit_classes.omfit_oedge.OMFIToedgeInput(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.omfit_ascii.OMFITascii

OEDGE input data file

setloc(casename=None, caseloc=None, root=None)[source]
load()[source]
create_output()[source]
altsave(filename=None)[source]
save(**kw)[source]

The save method is supposed to be overridden by classes which use OMFITobject as a superclass. If left as it is this method can detect if .filename was changed and if so, makes a copy from the original .filename (saved in the .link attribute) to the new .filename

class omfit_classes.omfit_oedge.OMFIToedgeRun(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict

class omfit_classes.omfit_oedge.OMFIToedgeNC(*args, **kwargs)[source]

Bases: omfit_classes.omfit_nc.OMFITnc

OEDGE output NC data file

plot_types_available = ['contour', 'along ring']
bg_plots = {'Drifts': {'data': ['V_pol', 'V_rad'], 'group': True, 'label': 'Drifts', 'lineaxes': ['P', 'S'], 'notebook': False}, 'E': {'data': ['KES'], 'group': False, 'label': 'Epara', 'lineaxes': ['P', 'S'], 'notebook': False, 'targ': ['KEDS'], 'units': 'V/m'}, 'Efields': {'data': ['E', 'ExB Pol', 'ExB Rad'], 'group': True, 'label': 'Electric Fields', 'lineaxes': ['P', 'S'], 'notebook': False}, 'ExB Pol': {'data': ['E_POL'], 'group': False, 'label': 'Epol', 'lineaxes': ['P', 'S'], 'notebook': False, 'targ': None, 'units': 'V/m'}, 'ExB Rad': {'data': ['E_RAD'], 'group': False, 'label': 'Epol', 'lineaxes': ['P', 'S'], 'notebook': False, 'targ': None, 'units': 'V/m'}, 'Plasma': {'data': ['ne', 'Te', 'Ti', 'Vb'], 'group': True, 'label': 'Background Plasma', 'lineaxes': ['P', 'S'], 'notebook': False}, 'Te': {'data': ['KTEBS'], 'group': False, 'label': 'Te', 'lineaxes': ['P', 'S'], 'notebook': False, 'targ': ['KTEDS'], 'units': 'eV'}, 'Ti': {'data': ['KTIBS'], 'group': False, 'label': 'Ti', 'lineaxes': ['P', 'S'], 'notebook': False, 'targ': ['KTIDS'], 'units': 'eV'}, 'V_pol': {'data': ['EXB_P'], 'group': False, 'label': 'Vpol', 'lineaxes': ['P', 'S'], 'notebook': False, 'targ': None, 'units': 'm/s'}, 'V_rad': {'data': ['EXB_R'], 'group': False, 'label': 'Vrad', 'lineaxes': ['P', 'S'], 'notebook': False, 'targ': None, 'units': 'm/s'}, 'Vb': {'data': ['KVHS'], 'group': False, 'label': 'Vpara', 'lineaxes': ['P', 'S'], 'notebook': False, 'scale': '1/QTIM', 'targ': ['KVDS'], 'units': 'm/s'}, 'ne': {'data': ['KNBS'], 'group': False, 'label': 'Density', 'lineaxes': ['P', 'S'], 'notebook': False, 'targ': ['KNDS'], 'units': 'm-3'}}
h_plots = {'H Atom Temp': {'data': ['PINENA'], 'group': False, 'label': 'Hydrogen Atom Temperature', 'lineaxes': ['P', 'S'], 'notebook': False, 'targ': None, 'units': 'eV'}, 'H Dalpha': {'data': ['PINALP'], 'group': False, 'label': 'Hydrogen Dalpha Emission', 'lineaxes': ['P', 'S'], 'notebook': False, 'targ': None, 'units': 'ph/m3/s'}, 'H Elec Energy Loss': {'data': ['PINQE'], 'group': False, 'label': 'Hydrogen-Electron Energy Loss Term', 'lineaxes': ['P', 'S'], 'notebook': False, 'targ': None, 'units': 'W/m3(?)'}, 'H Ion Energy Loss': {'data': ['PINQI'], 'group': False, 'label': 'Hydrogen-Ion Energy Loss Term', 'lineaxes': ['P', 'S'], 'notebook': False, 'targ': None, 'units': 'W/m3(?)'}, 'H Ionization': {'data': ['PINION'], 'group': False, 'label': 'Hydrogen Ionization', 'lineaxes': ['P', 'S'], 'notebook': False, 'targ': None, 'units': '/m3/s'}, 'H Line': {'data': ['HLINES'], 'group': False, 'label': 'Hydrogen Line Radiation', 'lineaxes': ['P', 'S'], 'notebook': False, 'targ': None, 'units': 'W/m3'}, 'H Mol Temp': {'data': ['PINENM'], 'group': False, 'label': 'Hydrogen Molecule Temperature', 'lineaxes': ['P', 'S'], 'notebook': False, 'targ': None, 'units': 'eV'}, 'H Power': {'data': ['HPOWLS'], 'group': False, 'label': 'Hydrogen Radiated Power', 'lineaxes': ['P', 'S'], 'notebook': False, 'targ': None, 'units': 'W/m3'}, 'H Quantities': {'data': ['H0 Density', 'H2 Density', 'H Ionization', 'H Recomb'], 'group': False, 'label': 'Hydrogen Quantities', 'lineaxes': ['P', 'S'], 'notebook': False}, 'H Recomb': {'data': ['PINREC'], 'group': False, 'label': 'Hydrogen Recombination', 'lineaxes': ['P', 'S'], 'notebook': False, 'targ': None, 'units': '/m3/s'}, 'H0 Density': {'data': ['PINATO'], 'group': False, 'label': 'Hydrogen Atom Density', 'lineaxes': ['P', 'S'], 'notebook': False, 'targ': None, 'units': '/m3'}, 'H2 Density': {'data': ['PINMOL'], 'group': False, 'label': 'Hydrogen Molecule Density', 'lineaxes': ['P', 'S'], 'notebook': False, 'targ': None, 'units': '/m3'}}
imp_plots = {'Imp Density': {'data': ['DDLIMS'], 'group': False, 'label': 'Impurity Density', 'lineaxes': ['P', 'S'], 'notebook': False, 'scale': 'ABSFAC', 'targ': None, 'units': '/m3'}, 'Imp Ionization': {'data': ['TIZS'], 'group': False, 'label': 'Impurity Ionization', 'lineaxes': ['P', 'S'], 'notebook': False, 'scale': 'ABSFAC', 'targ': None, 'units': '/m3/s'}, 'Imp Radiated Power': {'data': ['POWLS'], 'group': False, 'label': 'Impurity Radiated Power', 'lineaxes': ['P', 'S'], 'notebook': False, 'scale': 'ABSFAC', 'targ': None, 'units': 'W/m3'}, 'Imp Temperature': {'data': ['DDTS'], 'group': False, 'label': 'Impurity Temperature', 'lineaxes': ['P', 'S'], 'notebook': False, 'targ': None, 'units': 'eV'}, 'Impurity Quantities': {'data': ['Imp Density', 'Imp Temperature', 'Imp Ionization', 'Imp Radiated Power'], 'group': True, 'label': 'Impurity Quantities', 'lineaxes': ['P', 'S'], 'notebook': False}}
surface_plots = {}
diagnostic_plots = {}
plot_types = ['contour-polygon', 'contour-interpolate', 'along ring', 'surface']
all_plots = {'Drifts': {'data': ['V_pol', 'V_rad'], 'group': True, 'label': 'Drifts', 'lineaxes': ['P', 'S'], 'notebook': False}, 'E': {'data': ['KES'], 'group': False, 'label': 'Epara', 'lineaxes': ['P', 'S'], 'notebook': False, 'targ': ['KEDS'], 'units': 'V/m'}, 'Efields': {'data': ['E', 'ExB Pol', 'ExB Rad'], 'group': True, 'label': 'Electric Fields', 'lineaxes': ['P', 'S'], 'notebook': False}, 'ExB Pol': {'data': ['E_POL'], 'group': False, 'label': 'Epol', 'lineaxes': ['P', 'S'], 'notebook': False, 'targ': None, 'units': 'V/m'}, 'ExB Rad': {'data': ['E_RAD'], 'group': False, 'label': 'Epol', 'lineaxes': ['P', 'S'], 'notebook': False, 'targ': None, 'units': 'V/m'}, 'H Atom Temp': {'data': ['PINENA'], 'group': False, 'label': 'Hydrogen Atom Temperature', 'lineaxes': ['P', 'S'], 'notebook': False, 'targ': None, 'units': 'eV'}, 'H Dalpha': {'data': ['PINALP'], 'group': False, 'label': 'Hydrogen Dalpha Emission', 'lineaxes': ['P', 'S'], 'notebook': False, 'targ': None, 'units': 'ph/m3/s'}, 'H Elec Energy Loss': {'data': ['PINQE'], 'group': False, 'label': 'Hydrogen-Electron Energy Loss Term', 'lineaxes': ['P', 'S'], 'notebook': False, 'targ': None, 'units': 'W/m3(?)'}, 'H Ion Energy Loss': {'data': ['PINQI'], 'group': False, 'label': 'Hydrogen-Ion Energy Loss Term', 'lineaxes': ['P', 'S'], 'notebook': False, 'targ': None, 'units': 'W/m3(?)'}, 'H Ionization': {'data': ['PINION'], 'group': False, 'label': 'Hydrogen Ionization', 'lineaxes': ['P', 'S'], 'notebook': False, 'targ': None, 'units': '/m3/s'}, 'H Line': {'data': ['HLINES'], 'group': False, 'label': 'Hydrogen Line Radiation', 'lineaxes': ['P', 'S'], 'notebook': False, 'targ': None, 'units': 'W/m3'}, 'H Mol Temp': {'data': ['PINENM'], 'group': False, 'label': 'Hydrogen Molecule Temperature', 'lineaxes': ['P', 'S'], 'notebook': False, 'targ': None, 'units': 'eV'}, 'H Power': {'data': ['HPOWLS'], 'group': False, 'label': 'Hydrogen Radiated Power', 'lineaxes': ['P', 'S'], 'notebook': False, 'targ': None, 'units': 
'W/m3'}, 'H Quantities': {'data': ['H0 Density', 'H2 Density', 'H Ionization', 'H Recomb'], 'group': False, 'label': 'Hydrogen Quantities', 'lineaxes': ['P', 'S'], 'notebook': False}, 'H Recomb': {'data': ['PINREC'], 'group': False, 'label': 'Hydrogen Recombination', 'lineaxes': ['P', 'S'], 'notebook': False, 'targ': None, 'units': '/m3/s'}, 'H0 Density': {'data': ['PINATO'], 'group': False, 'label': 'Hydrogen Atom Density', 'lineaxes': ['P', 'S'], 'notebook': False, 'targ': None, 'units': '/m3'}, 'H2 Density': {'data': ['PINMOL'], 'group': False, 'label': 'Hydrogen Molecule Density', 'lineaxes': ['P', 'S'], 'notebook': False, 'targ': None, 'units': '/m3'}, 'Imp Density': {'data': ['DDLIMS'], 'group': False, 'label': 'Impurity Density', 'lineaxes': ['P', 'S'], 'notebook': False, 'scale': 'ABSFAC', 'targ': None, 'units': '/m3'}, 'Imp Ionization': {'data': ['TIZS'], 'group': False, 'label': 'Impurity Ionization', 'lineaxes': ['P', 'S'], 'notebook': False, 'scale': 'ABSFAC', 'targ': None, 'units': '/m3/s'}, 'Imp Radiated Power': {'data': ['POWLS'], 'group': False, 'label': 'Impurity Radiated Power', 'lineaxes': ['P', 'S'], 'notebook': False, 'scale': 'ABSFAC', 'targ': None, 'units': 'W/m3'}, 'Imp Temperature': {'data': ['DDTS'], 'group': False, 'label': 'Impurity Temperature', 'lineaxes': ['P', 'S'], 'notebook': False, 'targ': None, 'units': 'eV'}, 'Impurity Quantities': {'data': ['Imp Density', 'Imp Temperature', 'Imp Ionization', 'Imp Radiated Power'], 'group': True, 'label': 'Impurity Quantities', 'lineaxes': ['P', 'S'], 'notebook': False}, 'Plasma': {'data': ['ne', 'Te', 'Ti', 'Vb'], 'group': True, 'label': 'Background Plasma', 'lineaxes': ['P', 'S'], 'notebook': False}, 'Te': {'data': ['KTEBS'], 'group': False, 'label': 'Te', 'lineaxes': ['P', 'S'], 'notebook': False, 'targ': ['KTEDS'], 'units': 'eV'}, 'Ti': {'data': ['KTIBS'], 'group': False, 'label': 'Ti', 'lineaxes': ['P', 'S'], 'notebook': False, 'targ': ['KTIDS'], 'units': 'eV'}, 'V_pol': {'data': ['EXB_P'], 
'group': False, 'label': 'Vpol', 'lineaxes': ['P', 'S'], 'notebook': False, 'targ': None, 'units': 'm/s'}, 'V_rad': {'data': ['EXB_R'], 'group': False, 'label': 'Vrad', 'lineaxes': ['P', 'S'], 'notebook': False, 'targ': None, 'units': 'm/s'}, 'Vb': {'data': ['KVHS'], 'group': False, 'label': 'Vpara', 'lineaxes': ['P', 'S'], 'notebook': False, 'scale': '1/QTIM', 'targ': ['KVDS'], 'units': 'm/s'}, 'ne': {'data': ['KNBS'], 'group': False, 'label': 'Density', 'lineaxes': ['P', 'S'], 'notebook': False, 'targ': ['KNDS'], 'units': 'm-3'}}
plot_categories = ['background', 'hydrogen', 'impurity', 'surface', 'diagnostic']
load()[source]

Method used to load the content of the file specified in the .filename attribute

Parameters

**kw – keyword dictionary which is passed to the netCDF4.Dataset function

Returns

None

load_simulation_data()[source]
plots_available()[source]
get_plot_categories()[source]
get_plots_available(kind=None)[source]
get_plot_types()[source]
get_along_ring_axis_types(plot_select)[source]
need_ionization_state(selection)[source]
need_ring_number(selection)[source]
get_data_2d(dataname, targname=None, charge=None, scalef=1.0)[source]
plot(plot_select, plot_type, axis_type='S', ring_range=[], charge_range=[], zoom=None)[source]
plot_contour(plot_select, plot_type, charge_range=[], zoom=None)[source]
plot_contour_fig(fig, plot_select, plot_type, charge=None, zoom=None)[source]
plot_contour_polygon(fig, ax, dataname, charge=None, zoom=None, scalef=1.0)[source]
plot_contour_interpolate(fig, ax, dataname, targname=None, charge=None, zoom=None, scalef=1.0)[source]
calculate_layout(nplts)[source]
get_names(plot_select, plot_name)[source]
get_data_along_ring(ir, dataname=None, targname=None, charge=None, scalef=1.0)[source]
get_axis(ir, axis_type='S', targname=None)[source]
plot_ring(ir, fig, plot_select, axis_type='S', charge_range=[])[source]
plot_along_ring(plot_select, axis_type, ring_range, charge_range=[], zoom=None)[source]

omfit_omas

omfit_classes.omfit_omas.machine_mappings(machine, branch, user_machine_mappings=None, return_raw_mappings=False, raise_errors=False)[source]

Function to load the JSON mapping files (local or remote). Allows merging external mapping rules defined by users. This function sanity-checks the mapping file and adds extra info required for mapping.

Parameters
  • machine – machine for which to load the mapping files

  • branch – GitHub branch from which to load the machine mapping information

  • user_machine_mappings – Dictionary of mappings that users can pass to this function to temporarily use their mappings (useful for development and testing purposes)

  • return_raw_mappings – Return mappings without following __include__ statements or resolving eval2TDI directives

  • raise_errors – raise errors or simply print warnings if something isn’t right

Returns

dictionary with mapping transformations

class omfit_classes.omfit_omas.ODS(imas_version='3.41.0', consistency_check=True, cocos=11, cocosio=None, coordsio=None, unitsio=None, uncertainio=None, dynamic=None)[source]

Bases: collections.abc.MutableMapping

OMAS Data Structure class

Parameters
  • imas_version – IMAS version to use as a constraint for the node names

  • consistency_check – whether to enforce consistency with IMAS schema

  • cocos – internal COCOS representation (this can only be set when the object is created)

  • cocosio – COCOS representation of the data that is read/written from/to the ODS

  • coordsio – ODS with coordinates to use for the data that is read/written from/to the ODS

  • unitsio – ODS will return data with units if True

  • uncertainio – ODS will return data with uncertainties if True

  • dynamic – internal keyword used for dynamic data loading

homogeneous_time(key='', default=True)[source]

Dynamically evaluate whether time is homogeneous or not. NOTE: this method does not read ods[‘ids_properties.homogeneous_time’]; instead it uses the time info to figure it out

Parameters

default – what to return in case no time basis is defined

Returns

True/False or default value (True) if no time basis is defined

time(key='', extra_info=None)[source]

Return the time information for a given ODS location

Parameters
  • key – ods location

  • extra_info – dictionary that will be filled in place with extra information about time

Returns

time information for a given ODS location (scalar or array)

slice_at_time(time=None, time_index=None)[source]

method for selecting a time slice from a time-dependent ODS (NOTE: this method operates in place)

Parameters
  • time – time value to select

  • time_index – time index to select (NOTE: time_index has precedence over time)

Returns

modified ODS

time_index(time, key='')[source]

Return the index of the closest time-slice for a given ODS location

Parameters
  • time – time in seconds

  • key – ods location

Returns

index (integer) of the closest time-slice
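The closest-slice lookup can be illustrated with a minimal pure-Python sketch (not the OMAS implementation; the function name is made up for this example):

```python
def closest_time_index(time_basis, time):
    """Return the index of the entry in `time_basis` nearest to `time`."""
    return min(range(len(time_basis)), key=lambda i: abs(time_basis[i] - time))

# With a 1 s spaced time basis, a request for t=2.4 s maps to index 2
print(closest_time_index([0.0, 1.0, 2.0, 3.0], 2.4))  # -> 2
```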

property parent
property location

Property which returns instance of parent ODS

property top

Property which returns instance of top level ODS

property structure

Property which returns structure of current ODS

property imas_version

Property that returns the imas_version of this ods

Returns

string with imas_version

property consistency_check

property that returns whether consistency with IMAS schema is enabled or not

Returns

True/False/’warn’/’drop’/’strict’ or a combination of those strings

property cocos

property that tells in what COCOS format the data is stored internally of the ODS

property cocosio

property that tells in what COCOS format the data will be input/output

property unitsio

property that tells whether data should be returned with units or not

property uncertainio

property that tells whether data should be returned with uncertainties or not

property coordsio

property that tells what coordinates the data will be input/output with

property active_dynamic

property that points to the dynamic_ODS object and says whether it is active

property dynamic

property that points to the dynamic_ODS object

property ulocation
Returns

string with location of this object in universal ODS path format

getraw(key)[source]

Method to access data stored in the ODS with no processing of the key; it is thus faster than ODS.__getitem__(key) and effectively behaves like a pure Python dictionary/list __getitem__. This method is mostly meant to be used in the inner workings of the ODS class. NOTE: ODS.__getitem__(key, False) can be used to access items in the ODS with cocos and coordinates processing disabled but with support for the different syntaxes to access data

Parameters

key – string or integer

Returns

ODS value

same_init_ods(cls=None)[source]

Initializes a new ODS with the same attributes as this one

Returns

new ODS

setraw(key, value)[source]

Method to assign data to an ODS with no processing of the key; it is thus faster than ODS.__setitem__(key, value) and effectively behaves like a pure Python dictionary/list __setitem__. This method is mostly meant to be used in the inner workings of the ODS class.

Parameters
  • key – string, integer or a list of these

  • value – value to assign

Returns

value

paths(return_empty_leaves=False, traverse_code_parameters=True, include_structures=False, dynamic=True, verbose=False, **kw)[source]

Traverse the ods and return paths to its leaves

Parameters
  • return_empty_leaves – if False, only return paths to leaves that have data; if True, also return paths to empty leaves

  • traverse_code_parameters – traverse code parameters

  • include_structures – include paths leading to the leaves

  • dynamic – traverse paths that are not loaded in a dynamic ODS

Returns

list of paths that have data
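The traversal idea can be sketched in plain Python over nested dicts and lists (illustrative only; the real paths() also handles code parameters, empty leaves, and dynamic loading):

```python
def leaf_paths(node, prefix=()):
    """Yield a tuple path for every leaf in a nested dict/list structure."""
    if isinstance(node, dict):
        for key, value in node.items():
            yield from leaf_paths(value, prefix + (key,))
    elif isinstance(node, list):
        for index, value in enumerate(node):
            yield from leaf_paths(value, prefix + (index,))
    else:
        yield prefix  # a leaf that holds data

data = {'equilibrium': {'time_slice': [{'time': 0.1}, {'time': 0.2}]}}
print(list(leaf_paths(data)))
# -> [('equilibrium', 'time_slice', 0, 'time'), ('equilibrium', 'time_slice', 1, 'time')]
```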

pretty_paths(**kw)[source]

Traverse the ods and return paths that have data formatted nicely

Parameters

**kw – extra keywords passed to the paths() method

Returns

list of paths that have data formatted nicely

full_paths(**kw)[source]

Traverse the ods and return paths from root of ODS that have data

Parameters

**kw – extra keywords passed to the paths() method

Returns

list of paths that have data

full_pretty_paths(**kw)[source]

Traverse the ods and return paths from root of ODS that have data formatted nicely

Parameters

**kw – extra keywords passed to the full_paths() method

Returns

list of paths that have data formatted nicely

flat(**kw)[source]

Flat dictionary representation of the data

Parameters

**kw – extra keywords passed to the paths() method

Returns

OrderedDict with flat representation of the data
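A flat representation maps each dotted leaf location to its value; below is a minimal sketch over plain nested data (the real flat() operates on ODS objects):

```python
from collections import OrderedDict

def flatten(node, prefix=''):
    """Map dotted leaf locations of a nested dict/list structure to values."""
    if isinstance(node, dict):
        items = node.items()
    elif isinstance(node, list):
        items = enumerate(node)
    else:
        return OrderedDict([(prefix, node)])
    out = OrderedDict()
    for key, value in items:
        dotted = f'{prefix}.{key}' if prefix else str(key)
        out.update(flatten(value, dotted))
    return out

nested = {'core_profiles': {'profiles_1d': [{'electrons': {'temperature': 1000.0}}]}}
# single leaf: 'core_profiles.profiles_1d.0.electrons.temperature' -> 1000.0
print(flatten(nested))
```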

keys(dynamic=True)[source]

Return list of keys

Parameters

dynamic – whether dynamically loaded keys should be shown. This is True by default, since that is the behavior user-facing calls should see. Within the inner workings of OMAS we thus need to be careful and keep track of when this should not be the case. Throughout the library we use dynamic=1 or dynamic=0 for debug purposes, since one can place a conditional breakpoint in this function (checking if dynamic is True and self.dynamic) to verify that the dynamic=True calls indeed come from the user and not from within the library itself.

Returns

list of keys

values() → an object providing a view on D’s values[source]
items() → a set-like object providing a view on D’s items[source]
get(key, default=None)[source]

Check if key is present and, if not, return the default value without creating the value in the omas data structure

Parameters
  • key – ods location

  • default – default value

Returns

return default if key is not found

setdefault(key, value=None)[source]

Set value if key is not present

Parameters
  • key – ods location

  • value – value to set

Returns

value

copy()[source]
Returns

copy.deepcopy of current ODS object

clear()[source]

remove data from a branch

Returns

current ODS object

copy_attrs_from(ods)[source]

copy omas_ods_attrs attributes from input ods

Parameters

ods – input ods

Returns

self

prune()[source]

Prune ODS branches that are leafless

Returns

number of branches that were pruned

set_time_array(key, time_index, value)[source]

Convenience function for setting time dependent arrays

Parameters
  • key – ODS location to edit

  • time_index – time index of the value to set

  • value – value to set

Returns

time dependent array

update(ods2)[source]

Adds ods2’s key-value pairs to the ods

Parameters

ods2 – dictionary or ODS to be added into the ODS

list_coordinates(absolute_location=True)[source]

return dictionary with coordinates in a given ODS

Parameters

absolute_location – return keys as absolute or relative locations

Returns

dictionary with coordinates

coordinates(key=None)[source]

return dictionary with coordinates of a given ODS location

Parameters

key – ODS location to return the coordinates of Note: both the key location and coordinates must have data

Returns

OrderedDict with coordinates of a given ODS location

search_paths(search_pattern, n=None, regular_expression_startswith='')[source]

Find ODS locations that match a pattern

Parameters
  • search_pattern – regular expression ODS location string

  • n – raise an error if a number of occurrences different from n is found

  • regular_expression_startswith – indicates that use of regular expressions in the search_pattern is preceded by certain characters. This is used internally by some methods of the ODS to force users to use ‘@’ to indicate access to a path by regular expression.

Returns

list of ODS locations matching search_pattern pattern
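The matching can be pictured as applying a regular expression to dotted location strings; a pure-Python sketch (function name and sample locations are illustrative):

```python
import re

def search(locations, pattern):
    """Return the locations whose dotted path matches `pattern` from the start."""
    rx = re.compile(pattern)
    return [loc for loc in locations if rx.match(loc)]

locations = [
    'equilibrium.time_slice.0.global_quantities.ip',
    'equilibrium.time_slice.1.global_quantities.ip',
    'core_profiles.time',
]
print(search(locations, r'equilibrium\.time_slice\.\d+\.global_quantities\.ip'))
```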

xarray(key)[source]

Returns data of an ODS location and the corresponding coordinates as an xarray dataset. Note that the Dataset and the DataArrays have their attributes set with the ODS’s structure info

Parameters

key – ODS location

Returns

xarray dataset

dataset(homogeneous=None)[source]

Return xarray.Dataset representation of a whole ODS

Forming the N-D labeled arrays (tensors) that are at the base of xarrays requires that the number of elements in the arrays does not change across the arrays of data structures.

Parameters

homogeneous

  • False: flat representation of the ODS (data is not collected across arrays of structures)

  • ’time’: collect arrays of structures only along the time dimension (always valid for homogeneous_time=True)

  • ’full’: collect arrays of structures along all dimensions (may be valid in many situations, especially related to simulation data with homogeneous_time=True and where for example the number of ions, sources, etc. do not vary)

  • None: smart setting; uses homogeneous=’time’ if homogeneous_time=True, else False

Returns

xarray.Dataset

satisfy_imas_requirements(attempt_fix=True, raise_errors=True)[source]

Assign .time and .ids_properties.homogeneous_time info for top-level structures since these are required for writing an IDS to IMAS

Parameters
  • attempt_fix – fix dataset_description and wall IDS to have 0 times if none is set

  • raise_errors – raise errors if could not satisfy IMAS requirements

Returns

True if all is good, False if requirements are not satisfied, None if fixes were applied

save(*args, **kw)[source]

Save OMAS data

Parameters
  • filename – filename.XXX where the extension is used to select the save format method (eg. ‘pkl’,’nc’,’h5’,’ds’,’json’,’ids’); set to imas, s3, hdc, mongo for save methods that do not have a filename with extension

  • *args – extra arguments passed to save_omas_XXX() method

  • **kw – extra keywords passed to save_omas_XXX() method

Returns

return from save_omas_XXX() method
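The extension-based dispatch behind save()/load() can be sketched as follows (a simplified illustration; the handler selection in OMAS is richer than this):

```python
import os

def pick_format(filename):
    """Infer the save/load format from the filename extension."""
    known = {'pkl', 'nc', 'h5', 'ds', 'json', 'ids'}
    ext = os.path.splitext(filename)[1].lstrip('.')
    if ext in known:
        return ext
    raise ValueError(f'cannot infer format from {filename!r}')

print(pick_format('shot_133221.json'))  # -> 'json'
```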

load(*args, **kw)[source]

Load OMAS data

Parameters
  • filename – filename.XXX where the extension is used to select the load format method (eg. ‘pkl’,’nc’,’h5’,’ds’,’json’,’ids’); set to imas, s3, hdc, mongo for load methods that do not have a filename with extension

  • consistency_check – perform consistency check once the data is loaded

  • *args – extra arguments passed to load_omas_XXX() method

  • **kw – extra keywords passed to load_omas_XXX() method

Returns

ODS with loaded data

open(*args, **kw)[source]

Dynamically load OMAS data for seekable storage formats

Parameters
  • filename – filename.XXX where the extension is used to select the load format method (eg. ‘nc’,’h5’,’ds’,’json’,’ids’); set to imas, s3, hdc, mongo for load methods that do not have a filename with extension

  • consistency_check – perform consistency check once the data is loaded

  • *args – extra arguments passed to dynamic_omas_XXX() method

  • **kw – extra keywords passed to dynamic_omas_XXX() method

Returns

ODS with loaded data

close()[source]
diff(ods, ignore_type=False, ignore_empty=False, ignore_keys=[], ignore_default_keys=True, rtol=1e-05, atol=1e-08)[source]

return differences between this ODS and the one passed

Parameters
  • ods – ODS to compare against

  • ignore_type – ignore object type differences

  • ignore_empty – ignore empty nodes

  • ignore_keys – ignore the following keys

  • ignore_default_keys – ignores the following keys from the comparison: dataset_description.data_entry.user, dataset_description.data_entry.run, dataset_description.data_entry.machine, dataset_description.ids_properties, dataset_description.imas_version, dataset_description.time, ids_properties.homogeneous_time, ids_properties.occurrence, ids_properties.version_put.data_dictionary, ids_properties.version_put.access_layer, ids_properties.version_put.access_layer_language

  • rtol – the relative tolerance parameter

  • atol – the absolute tolerance parameter

Returns

dictionary with differences
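The rtol/atol comparison applied to numeric leaves follows the usual combined-tolerance rule (|a - b| <= atol + rtol*|b|); a minimal sketch:

```python
def values_differ(a, b, rtol=1e-05, atol=1e-08):
    """True when |a - b| exceeds the combined absolute/relative tolerance."""
    return abs(a - b) > atol + rtol * abs(b)

print(values_differ(1.0, 1.0 + 1e-09))  # within tolerance -> False
print(values_differ(1.0, 1.1))          # outside tolerance -> True
```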

diff_attrs(ods, attrs=['_consistency_check', '_imas_version', '_cocos', '_cocosio', '_coordsio', '_unitsio', '_uncertainio', '_dynamic', '_parent'], verbose=False)[source]

Checks if two ODSs have any difference in their attributes

Parameters
  • ods – ODS to compare against

  • attrs – list of attributes to compare

  • verbose – print differences to stdout

Returns

dictionary with the list of attributes that have differences, or False otherwise

from_structure(structure, depth=0)[source]

Generate an ODS starting from a hierarchical structure made of dictionaries and lists

Parameters

structure – input structure

Returns

self

codeparams2xml()[source]

Convert code.parameters to a XML string

codeparams2dict()[source]

Convert code.parameters to a CodeParameters dictionary object

sample(ntimes=1, homogeneous_time=None)[source]

Populates the ods with sample data

Parameters
  • ntimes – number of time slices to generate

  • homogeneous_time – only return samples that have ids_properties.homogeneous_time either True or False

Returns

self

document(what=['coordinates', 'data_type', 'documentation', 'units'])[source]

RST documentation of the ODS content

Parameters

what – fields to be included in the documentation; if None, all fields are included

Returns

string with RST documentation

to_odx(homogeneous=None)[source]

Generate an ODX from the current ODS

Parameters

homogeneous

  • False: flat representation of the ODS (data is not collected across arrays of structures)

  • ’time’: collect arrays of structures only along the time dimension (always valid for homogeneous_time=True)

  • ’full’: collect arrays of structures along all dimensions (may be valid in many situations, especially related to simulation data with homogeneous_time=True and where for example the number of ions, sources, etc. do not vary)

  • None: smart setting; uses homogeneous=’time’ if homogeneous_time=True, else False

Returns

ODX

info(location)[source]

return node info

Parameters

location – location of the node to return info of

Returns

dictionary with info

relax(other, alpha=0.5)[source]

Blend floating point data in this ODS with corresponding floating point in other ODS

Parameters
  • other – other ODS

  • alpha – relaxation coefficient this_ods * (1.0 - alpha) + other_ods * alpha

Returns

list of paths that have been blended
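The relaxation rule stated above, applied leaf-by-leaf to floating point data, reduces to a simple blend; a pure-Python sketch:

```python
def relax_values(this_vals, other_vals, alpha=0.5):
    """Blend two arrays element-wise: this * (1 - alpha) + other * alpha."""
    return [t * (1.0 - alpha) + o * alpha for t, o in zip(this_vals, other_vals)]

print(relax_values([1.0, 2.0], [3.0, 4.0]))  # -> [2.0, 3.0]
```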

physics_add_phi_to_equilbrium_profiles_1d_ods(time_index)

Adds profiles_1d.phi to an ODS using q

Parameters
  • ods – input ods

  • time_index – time slices to process

physics_add_rho_pol_norm_to_equilbrium_profiles_1d_ods(time_index)
physics_check_iter_scenario_requirements()

Check that the current ODS satisfies the ITER scenario database requirements as defined in https://confluence.iter.org/x/kQqOE

Returns

list of elements that are missing to satisfy the ITER scenario requirements

physics_consistent_times(attempt_fix=True, raise_errors=True)

Assign .time and .ids_properties.homogeneous_time info for top-level structures since these are required for writing an IDS to IMAS

Parameters
  • attempt_fix – fix dataset_description and wall IDS to have 0 times if none is set

  • raise_errors – raise errors if could not satisfy IMAS requirements

Returns

True if all is good, False if requirements are not satisfied, None if fixes were applied

physics_core_profiles_consistent(update=True, use_electrons_density=False, enforce_quasineutrality=False)
Calls all core_profiles consistency functions including
  • core_profiles_densities

  • core_profiles_pressures

  • core_profiles_zeff

Parameters
  • ods – input ods

  • update – operate in place

  • use_electrons_density – denominator is core_profiles.profiles_1d.:.electrons.density instead of sum Z*n_i in Z_eff calculation

  • enforce_quasineutrality – update electron density to be quasineutral with ions

Returns

updated ods

physics_core_profiles_currents(time_index=None, rho_tor_norm=None, j_actuator='default', j_bootstrap='default', j_ohmic='default', j_non_inductive='default', j_total='default', warn=True)

This function sets currents in ods[‘core_profiles’][‘profiles_1d’][time_index]

If provided currents are inconsistent with each other or ods, ods is not updated and an error is thrown.

Updates integrated currents in ods[‘core_profiles’][‘global_quantities’] (N.B.: equilibrium IDS is required for evaluating j_tor and integrated currents)

Parameters
  • ods – ODS to update in-place

  • time_index – ODS time index to update; if None, all times are updated

  • rho_tor_norm – normalized rho grid upon which each j is given

For each j:
  • ndarray: set in ods if consistent

  • ‘default’: use value in ods if present, else set to None

  • None: try to calculate from the other currents; delete from ods if it cannot be calculated

Parameters
  • j_actuator – Non-inductive, non-bootstrap current <J.B>/B0 N.B.: used for calculating other currents and consistency, but not set in ods

  • j_bootstrap – Bootstrap component of <J.B>/B0

  • j_ohmic – Ohmic component of <J.B>/B0

  • j_non_inductive – Non-inductive component of <J.B>/B0 Consistency requires j_non_inductive = j_actuator + j_bootstrap, either as explicitly provided or as computed from other components.

  • j_total – Total <J.B>/B0 Consistency requires j_total = j_ohmic + j_non_inductive either as explicitly provided or as computed from other components.

physics_core_profiles_densities(update=True, enforce_quasineutrality=False)

Density, density_thermal, and density_fast for electrons and ions are filled and are self-consistent

Parameters
  • ods – input ods

  • update – operate in place

  • enforce_quasineutrality – update electron density to be quasineutral with ions

Returns

updated ods

physics_core_profiles_pressures(update=True)

Calculates individual ions pressures

core_profiles.profiles_1d.:.ion.:.pressure_thermal #Pressure (thermal) associated with random motion ~average((v-average(v))^2) core_profiles.profiles_1d.:.ion.:.pressure #Pressure (thermal+non-thermal)

as well as total pressures

core_profiles.profiles_1d.:.pressure_thermal #Thermal pressure (electrons+ions) core_profiles.profiles_1d.:.pressure_ion_total #Total (sum over ion species) thermal ion pressure core_profiles.profiles_1d.:.pressure_perpendicular #Total perpendicular pressure (electrons+ions, thermal+non-thermal) core_profiles.profiles_1d.:.pressure_parallel #Total parallel pressure (electrons+ions, thermal+non-thermal)

NOTE: the fast particles ion pressures are read, not set by this function:

core_profiles.profiles_1d.:.ion.:.pressure_fast_parallel #Non-thermal parallel pressure core_profiles.profiles_1d.:.ion.:.pressure_fast_perpendicular #Non-thermal perpendicular pressure

Parameters
  • ods – input ods

  • update – operate in place

Returns

updated ods

physics_core_profiles_zeff(update=True, use_electrons_density=False, enforce_quasineutrality=False)

calculates effective charge

Parameters
  • ods – input ods

  • update – operate in place

  • use_electrons_density – denominator core_profiles.profiles_1d.:.electrons.density instead of sum Z*n_i

  • enforce_quasineutrality – update electron density to be quasineutral with ions

Returns

updated ods
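The effective-charge definition used here is Z_eff = sum(Z_i^2 n_i) / n_e, where the denominator is either the electron density or the quasineutral sum of Z_i n_i; a minimal sketch:

```python
def zeff(ion_charges, ion_densities, n_e=None):
    """Z_eff = sum(Z_i^2 * n_i) / n_e; n_e from quasineutrality if not given."""
    numerator = sum(Z**2 * n for Z, n in zip(ion_charges, ion_densities))
    if n_e is None:  # quasineutral electron density: sum(Z_i * n_i)
        n_e = sum(Z * n for Z, n in zip(ion_charges, ion_densities))
    return numerator / n_e

print(zeff([1], [1e19]))  # pure hydrogenic plasma -> 1.0
```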

physics_core_sources_j_parallel_sum(time_index=0)

ods function used to sum all j_parallel contributions from core_sources (j_actuator)

Parameters
  • ods – input ods

  • time_index – time slice to process

Returns

sum of j_parallel in [A/m^2]

physics_current_from_eq(time_index)

This function sets the currents in ods[‘core_profiles’][‘profiles_1d’][time_index] using ods[‘equilibrium’][‘time_slice’][time_index][‘profiles_1d’][‘j_tor’]

Parameters
  • ods – ODS to update in-place

  • time_index – ODS time index to update; if None, all times are updated

physics_derive_equilibrium_profiles_2d_quantity(time_index, grid_index, quantity)

This function derives values of empty fields in profiles_2d from other parameters in the equilibrium ods. Currently only the magnetic field components are supported.

Parameters
  • ods – input ods

  • time_index – time slice to process

  • grid_index – Index of grid to map

  • quantity – Member of profiles_2d to be derived

Returns

updated ods

physics_equilibrium_consistent()

Calculate missing derived quantities for equilibrium IDS

Parameters

ods – ODS to update in-place

Returns

updated ods

physics_equilibrium_form_constraints(times=None, default_average=0.02, constraints=None, averages=None, cutoff_hz=None, rm_integr_drift_after=None, update=True, **nuconv_kw)

generate equilibrium constraints from experimental data in ODS

Parameters
  • ods – input ODS

  • times – list of times at which to generate the constraints

  • default_average – default averaging time

  • constraints

    list of constraints to be formed (if experimental data is available) NOTE: only the constraints marked with OK are supported at this time:

    OK b_field_tor_vacuum_r
    OK bpol_probe
    OK diamagnetic_flux
     * faraday_angle
    OK flux_loop
    OK ip
     * iron_core_segment
     * mse_polarisation_angle
     * n_e
     * n_e_line
    OK pf_current
     * pf_passive_current
     * pressure
     * q
     * strike_point
     * x_point
    

  • averages – dictionary with average times for individual constraints Smoothed using Gaussian, sigma=averages/4. and the convolution is integrated across +/-4.*sigma.

  • cutoff_hz – a list of two elements with low and high cutoff frequencies [lowFreq, highFreq]

  • rm_integr_drift_after – time in ms after which all currents are assumed to be zero and the signal should equal zero. Used for removing integrator drift.

  • update – operate in place

Returns

updated ods

physics_equilibrium_ggd_to_rectangular(time_index=None, resolution=None, method='linear', update=True)

Convert GGD data to profiles 2D

Parameters
  • ods – input ods

  • time_index – time slices to process

  • resolution – integer or tuple for rectangular grid resolution

  • method – one of ‘nearest’, ‘linear’, ‘cubic’, ‘extrapolate’

  • update – operate in place

Returns

updated ods

physics_equilibrium_profiles_2d_map(time_index, grid_index, quantity, dim1=None, dim2=None, cache=None, return_cache=False, out_of_bounds_value=nan)

This routine creates interpolators for quantities and stores them in the cache for future use. It can also be used to just return the current profiles_2d quantity by omitting dim1 and dim2. At the moment this routine always extrapolates for data outside the defined grid range.

Parameters
  • ods – input ods

  • time_index – time slices to process

  • grid_index – Index of grid to map

  • quantity – Member of profiles_2d[:] to map

  • dim1 – First coordinate of the points to map to

  • dim2 – Second coordinate of the points to map to

  • cache – Cache to store interpolants in

  • return_cache – Toggles return of cache

Returns

mapped positions (and cache if return_cache)

physics_equilibrium_stored_energy(update=True)

Calculate MHD stored energy from equilibrium pressure and volume

Parameters
  • ods – input ods

  • update – operate in place

Returns

updated ods

physics_equilibrium_transpose_RZ(flip_dims=False)

Transpose 2D grid values for RZ grids under equilibrium.time_slice.:.profiles_2d.:.

Parameters
  • ods – ODS to update in-place

  • flip_dims – whether to switch equilibrium.time_slice.:.profiles_2d.:.grid.dim1 and dim2

Returns

updated ods

physics_imas_info()

add ids_properties.version_put… information

Returns

updated ods

physics_magnetics_sanitize(remove_bpol_probe=True)

Take data in legacy magnetics.bpol_probe and store it in current magnetics.b_field_pol_probe and magnetics.b_field_tor_probe

Parameters

ods – ODS to update in-place

Returns

updated ods

physics_remap_flux_coordinates(time_index, origin, destination, values)

Maps from one magnetic coordinate system to another. At the moment only psi <-> rho_pol is supported.

Parameters
  • ods – input ods

  • time_index – time slices to process

  • origin – Specifier for original coordinate system

  • destination – Target coordinate system for output

  • values – Values to transform

Returns

Transformed values

physics_resolve_equilibrium_profiles_2d_grid_index(time_index, grid_identifier)

Convenience function to identify which of profiles_2d[:].grid_type.index matches the specified grid_identifier

Parameters
  • ods – input ods

  • time_index – time index to search

  • grid_identifier – grid type to be resolved

Returns

Index of the requested grid, not to be confused with profiles_2d[:].grid_type.index

physics_summary_consistent_global_quantities(ds=None, update=True)

Generate summary.global_quantities from global_quantities of other IDSs

Parameters
  • ods – input ods

  • ds – IDS from which to update summary.global_quantities. All IDSs if None.

  • update – operate in place

Returns

updated ods

physics_summary_currents(time_index=None, update=True)

Calculates plasma currents from core_profiles for each time slice and stores them in the summary ods

Parameters
  • ods – input ods

  • time_index – time slices to process

  • update – operate in place

Returns

updated ods

physics_summary_global_quantities(update=True)
Calculates global quantities for each time slice and stores them in the summary ods:
  • Greenwald Fraction

  • Energy confinement time estimated from the IPB98(y,2) scaling

  • Integrate power densities to the totals

  • Generate summary.global_quantities from global_quantities of other IDSs

Parameters
  • ods – input ods

  • update – operate in place

Returns

updated ods

physics_summary_greenwald(update=True)

Calculates Greenwald Fraction for each time slice and stores them in the summary ods.

Parameters
  • ods – input ods

  • update – operate in place

Returns

updated ods
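The underlying definition is the Greenwald density n_GW = Ip[MA] / (pi a^2) in units of 1e20 m^-3, with the fraction being the line-averaged density over n_GW; a sketch with illustrative numbers:

```python
import math

def greenwald_fraction(n_line_avg, ip, minor_radius):
    """Greenwald fraction: <n_e,line> / n_GW, n_GW = Ip[MA] / (pi a^2) * 1e20 m^-3."""
    n_gw = (ip / 1e6) / (math.pi * minor_radius**2) * 1e20  # m^-3
    return n_line_avg / n_gw

# illustrative DIII-D-like numbers: n = 5e19 m^-3, Ip = 1 MA, a = 0.56 m
print(round(greenwald_fraction(5e19, 1.0e6, 0.56), 3))  # -> 0.493
```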

physics_summary_heating_power(update=True)

Integrate power densities to the total and heating and current drive systems and fills summary.global_quantities

Parameters
  • ods – input ods

  • update – operate in place

Returns

updated ods

physics_summary_lineaverage_density(line_grid=2000, time_index=None, update=True, doPlot=False)

Calculates line-average electron density for each time slice and stores them in the summary ods

Parameters
  • ods – input ods

  • line_grid – number of points to calculate line average density over (includes point outside of boundary)

  • time_index – time slices to process

  • update – operate in place

  • doPlot – plots the interferometer lines on top of the equilibrium boundary shape

Returns

updated ods

physics_summary_taue(thermal=True, update=True)

Calculates the energy confinement time estimated from the IPB98(y,2) scaling for each time slice and stores it in the summary ods

Parameters
  • ods – input ods

  • update – operate in place

  • thermal – if True, calculates the thermal part of the energy confinement time from core_profiles; otherwise uses the MHD stored energy from the equilibrium ods

Returns

updated ods

physics_summary_thermal_stored_energy(update=True)

Calculates the stored energy based on the contents of core_profiles for all time-slices

Parameters
  • ods – input ods

  • update – operate in place

Returns

updated ods

physics_wall_add(machine=None)

Add wall information to the ODS

Parameters
  • ods – ODS to update in-place

  • machine – machine of which to load the wall (if None it is taken from ods[‘dataset_description.data_entry.machine’])

plot_bolometer_overlay(ax=None, reset_fan_color=True, colors=None, **kw)

Overlays bolometer chords

Parameters
  • ods – ODS instance

  • ax – axes instance into which to plot (default: gca())

  • reset_fan_color – bool At the start of each bolometer fan (group of channels), set color to None to let a new one be picked by the cycler. This will override manually specified color.

  • colors – list of matplotlib color specifications. Do not use a single RGBA style spec.

  • **kw

    Additional keywords for bolometer plot

    • Accepts standard omas_plot overlay keywords listed in overlay() documentation: mask, labelevery, …

    • Remaining keywords are passed to plot call for drawing lines for the bolometer sightlines

plot_charge_exchange_overlay(ax=None, which_pos='closest', **kw)

Overlays Charge Exchange Recombination (CER) spectroscopy channel locations

Parameters
  • ods – OMAS ODS instance

  • ax – axes instance into which to plot (default: gca())

  • which_pos –

    ‘all’: plot all valid positions this channel uses. This can vary in time depending on which beams are on.

    ‘closest’: for each channel, pick the time slice with valid data closest to the time used for the equilibrium contours and show the position at this time. Falls back to ‘all’ if the equilibrium time cannot be read from time_slice 0 of the equilibrium in the ODS.

  • **kw

    Additional keywords for CER plot:

    color_tangential: color to use for tangentially-viewing channels

    color_vertical: color to use for vertically-viewing channels

    color_radial: color to use for radially-viewing channels

    marker_tangential, marker_vertical, marker_radial: plot symbols to use for T, V, R viewing channels

    • Accepts standard omas_plot overlay keywords listed in overlay() documentation: mask, labelevery, …

    • Remaining keywords are passed to plot call

plot_core_profiles_currents_summary(time_index=None, time=None, ax=None, **kw)

Plot currents in core_profiles_1d

Parameters
  • ods – input ods

  • fig – figure to plot in (a new figure is generated if fig is None)

  • time_index – int, list of ints, or None time slice to plot. If None all timeslices are plotted.

  • time – float, list of floats, or None; time to plot. If None, all timeslices are plotted; if not None, it takes precedence over time_index

plot_core_profiles_pressures(time_index=None, time=None, ax=None, **kw)

Plot pressures in ods[‘core_profiles’][‘profiles_1d’][time_index]

Parameters
  • ods – input ods

  • time_index – int, list of ints, or None time slice to plot. If None, all time slices are plotted.

  • time – float, list of floats, or None time to plot. If None, all time slices are plotted. If not None, it takes precedence over time_index.

  • ax – axes to plot in (active axes is generated if ax is None)

  • kw – arguments passed to matplotlib plot statements

Returns

axes handler

plot_core_profiles_summary(time_index=None, time=None, fig=None, ods_species=None, quantities=['density_thermal', 'temperature'], x_axis='rho_tor_norm', **kw)

Plot densities and temperature profiles for electrons and all ion species as per ods[‘core_profiles’][‘profiles_1d’][time_index]

Parameters
  • ods – input ods

  • fig – figure to plot in (a new figure is generated if fig is None)

  • time_index – int, list of ints, or None time slice to plot. If None, all time slices are plotted.

  • time – float, list of floats, or None time to plot. If None, all time slices are plotted. If not None, it takes precedence over time_index.

  • ods_species – list of ion species indices as listed in the core_profiles ODS (electron index = -1). If None, plot all ion species.

  • quantities – list of strings to plot from the profiles_1d ODS, e.g. zeff, temperature, rotation_frequency_tor_sonic

  • kw – arguments passed to matplotlib plot statements

Returns

figure handler
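
The interplay of the time and time_index keywords, shared by most plotting methods here, can be sketched with a hypothetical helper (not the omas source): time, if given, is resolved to the nearest slice and takes precedence over time_index; None means all slices.

```python
def resolve_time_index(times, time_index=None, time=None):
    """Resolve slice selection: `time` (seconds) takes precedence over
    `time_index`; None selects all time slices."""
    if time is not None:
        requested = time if isinstance(time, (list, tuple)) else [time]
        return [min(range(len(times)), key=lambda i: abs(times[i] - t))
                for t in requested]
    if time_index is not None:
        return time_index if isinstance(time_index, (list, tuple)) else [time_index]
    return list(range(len(times)))  # all slices

times = [0.0, 0.5, 1.0, 1.5]
print(resolve_time_index(times, time_index=2, time=0.6))  # [1]: time wins
```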

plot_core_sources_summary(time_index=None, time=None, fig=None, **kw)

Plot sources for electrons and all ion species

Parameters
  • ods – input ods

  • time_index – int, list of ints, or None time slice to plot. If None, all time slices are plotted.

  • time – float, list of floats, or None time to plot. If None, all time slices are plotted. If not None, it takes precedence over time_index.

  • fig – figure to plot in (a new figure is generated if fig is None)

  • kw – arguments passed to matplotlib plot statements

Returns

axes

plot_core_transport_fluxes(time_index=None, time=None, fig=None, show_total_density=True, plot_zeff=False, **kw)

Plot densities and temperature profiles for all species, rotation profile, TGYRO fluxes and fluxes from power_balance per STEP state.

Parameters
  • ods – input ods

  • time_index – int, list of ints, or None time slice to plot. If None, all time slices are plotted.

  • time – float, list of floats, or None time to plot. If None, all time slices are plotted. If not None, it takes precedence over time_index.

  • fig – figure to plot in (a new figure is generated if fig is None)

  • show_total_density – bool Show total thermal+fast in addition to thermal/fast breakdown if available

  • plot_zeff – if True, plot zeff below the plasma rotation

Keyword Arguments

matplotlib plot parameters

Returns

axes

plot_ec_launchers_CX(time_index=None, time=None, ax=None, beam_trajectory=None, **kw)

Plot EC launchers in poloidal cross-section

Parameters
  • ods – input ods

  • time_index – int, list of ints, or None time slice to plot. If None, all time slices are plotted.

  • time – float, list of floats, or None time to plot. If None, all time slices are plotted. If not None, it takes precedence over time_index.

  • ax – axes to plot in (active axes is generated if ax is None)

  • kw – arguments passed to matplotlib plot statements

  • beam_trajectory – length of launcher on plot

Returns

axes handler

plot_ec_launchers_CX_topview(time_index=None, time=None, ax=None, beam_trajectory=None, **kw)

Plot EC launchers in toroidal cross-section

Parameters
  • ods – input ods

  • time_index – int, list of ints, or None time slice to plot. If None, all time slices are plotted.

  • time – float, list of floats, or None time to plot. If None, all time slices are plotted. If not None, it takes precedence over time_index.

  • ax – axes to plot in (active axes is generated if ax is None)

  • kw – arguments passed to matplotlib plot statements

  • beam_trajectory – length of launcher on plot

Returns

axes handler

plot_equilibrium_CX(time_index=None, time=None, levels=None, contour_quantity='rho_tor_norm', allow_fallback=True, ax=None, sf=3, label_contours=None, show_wall=True, xkw={}, ggd_points_triangles=None, **kw)

Plot equilibrium cross-section as per ods[‘equilibrium’][‘time_slice’][time_index]

Parameters
  • ods – ODS instance input ods containing equilibrium data

  • time_index – int, list of ints, or None time slice to plot. If None, all time slices are plotted.

  • time – float, list of floats, or None time to plot. If None, all time slices are plotted. If not None, it takes precedence over time_index.

  • levels – sorted numeric iterable values to pass to 2D plot as contour levels

  • contour_quantity – string quantity to contour, anything in eq[‘profiles_1d’] or eq[‘profiles_2d’] or psi_norm

  • allow_fallback – bool If rho/phi is requested but not available, plot on psi instead if allowed. Otherwise, raise ValueError.

  • ax – Axes instance axes to plot in (active axes is generated if ax is None)

  • sf – int Resample scaling factor. For example, set to 3 to resample to 3x higher resolution. Makes contours smoother.

  • label_contours – bool or None True/False: do(n’t) label contours None: only label if contours are of q

  • show_wall – bool Plot the inner wall or limiting surface, if available

  • xkw – dict Keywords to pass to plot call to draw X-point(s). Disable X-points by setting xkw={‘marker’: ‘’}

  • ggd_points_triangles – Caching of ggd data structure as generated by omas_physics.grids_ggd_points_triangles() method

  • **kw – keywords passed to matplotlib plot statements

Returns

Axes instance

plot_equilibrium_CX_topview(time_index=None, time=None, ax=None, **kw)

Plot equilibrium toroidal cross-section as per ods[‘equilibrium’][‘time_slice’][time_index]

Parameters
  • ods – ODS instance input ods containing equilibrium data

  • time_index – int, list of ints, or None time slice to plot. If None, all time slices are plotted.

  • time – float, list of floats, or None time to plot. If None, all time slices are plotted. If not None, it takes precedence over time_index.

  • ax – Axes instance [optional] axes to plot in (active axes is generated if ax is None)

  • **kw – arguments passed to matplotlib plot statements

Returns

Axes instance

plot_equilibrium_quality(fig=None, **kw)

Plot equilibrium convergence error and total Chi-squared as a function of time

Parameters
  • ods – input ods

  • fig – figure to plot in (a new figure is generated if fig is None)

plot_equilibrium_summary(time_index=None, time=None, fig=None, ggd_points_triangles=None, omas_viewer=False, **kw)

Plot equilibrium cross-section and P, q, P’, FF’ profiles as per ods[‘equilibrium’][‘time_slice’][time_index]

Parameters
  • ods – input ods

  • time_index – int, list of ints, or None time slice to plot. If None, all time slices are plotted.

  • time – float, list of floats, or None time to plot. If None, all time slices are plotted. If not None, it takes precedence over time_index.

  • fig – figure to plot in (a new figure is generated if fig is None)

  • ggd_points_triangles – Caching of ggd data structure as generated by omas_physics.grids_ggd_points_triangles() method

  • kw – arguments passed to matplotlib plot statements

Returns

figure handler

plot_gas_injection_overlay(ax=None, angle_not_in_pipe_name=False, which_gas='all', show_all_pipes_in_group=True, simple_labels=False, label_spacer=0, colors=None, draw_arrow=True, **kw)

Plots overlays of gas injectors

Parameters
  • ods – OMAS ODS instance

  • ax – axes instance into which to plot (default: gca())

  • angle_not_in_pipe_name – bool Set this to include (Angle) at the end of injector labels. Useful if injector/pipe names don’t already include angles in them.

  • which_gas

    string or list Filter for selecting which gas pipes to display.

    • If string: get a preset group, like ‘all’.

    • If list: only pipes in the list will be shown. Abbreviations are tolerated; e.g. GASA is recognized as GASA_300. One abbreviation can turn on several pipes. There are several injection location names starting with RF_ on DIII-D, for example.

  • show_all_pipes_in_group – bool Some pipes have the same R,Z coordinates of their exit positions (but different phi locations) and will appear at the same location on the plot. If this keyword is True, labels for all the pipes in such a group will be displayed together. If it is False, only the first one in the group will be labeled.

  • simple_labels – bool Simplify labels by removing suffix after the last underscore.

  • label_spacer – int Number of blank lines and spaces to insert between labels and symbol

  • colors – list of matplotlib color specifications. These colors control the display of the various gas ports. The list will be repeated to make sure it is long enough. Do not specify a single RGB tuple by itself; however, a single tuple inside a list is okay, e.g. [(0.9, 0, 0, 0.9)]. If the color keyword is used (see **kw), then color will be popped to set the default for colors in case colors is None.

  • draw_arrow – bool or dict Draw an arrow toward the machine at the location of the gas inlet. If dict, pass keywords to arrow drawing func.

  • **kw

    Additional keywords for gas plot:

    • Accepts standard omas_plot overlay keywords listed in overlay() documentation: mask, labelevery, …

    • Remaining keywords are passed to plot call for drawing markers at the gas locations.
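
The colors handling described above (repeat the list until it covers all ports, pop a color keyword as the default) can be sketched with a hypothetical helper; this is an illustration, not the omas implementation:

```python
import itertools

def resolve_colors(colors, n_items, **kw):
    """Repeat a color list to cover n_items; fall back to a single
    `color` keyword (popped from kw) when colors is None."""
    if colors is None:
        colors = [kw.pop('color', None)]
    return list(itertools.islice(itertools.cycle(colors), n_items))

print(resolve_colors(['r', 'g'], 5))  # ['r', 'g', 'r', 'g', 'r']
```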

plot_interferometer_overlay(ax=None, **kw)

Plots overlays of interferometer chords.

Parameters
  • ods – OMAS ODS instance

  • ax – axes instance into which to plot (default: gca())

  • **kw

    Additional keywords

    • Accepts standard omas_plot overlay keywords listed in overlay() documentation: mask, labelevery, …

    • Remaining keywords are passed to plot call

plot_langmuir_probes_overlay(ax=None, embedded_probes=None, colors=None, show_embedded=True, show_reciprocating=False, **kw)

Overlays Langmuir probe locations

Parameters
  • ods – ODS instance Must contain langmuir_probes with embedded position data

  • ax – Axes instance

  • embedded_probes – list of strings Specify probe names to use. Only the embedded probes listed will be plotted. Set to None to plot all probes. Probe names are like ‘F11’ or ‘P-6’ (the same as appear on the overlay).

  • colors – list of matplotlib color specifications. Do not use a single RGBA style spec.

  • show_embedded – bool Recommended: don’t enable both embedded and reciprocating plots at the same time; make two calls instead. It will be easier to handle mapping of masks, colors, etc.

  • show_reciprocating – bool

  • **kw

    Additional keywords.

    • Accepts standard omas_plot overlay keywords listed in overlay() documentation: mask, labelevery, …

    • Others will be passed to the plot() call for drawing the probes.

plot_lh_antennas_CX(time_index=None, time=None, ax=None, antenna_trajectory=None, **kw)

Plot LH antenna position in poloidal cross-section

Parameters
  • ods – input ods

  • time_index – int, list of ints, or None time slice to plot. If None, all time slices are plotted.

  • time – float, list of floats, or None time to plot. If None, all time slices are plotted. If not None, it takes precedence over time_index.

  • ax – axes to plot in (active axes is generated if ax is None)

  • antenna_trajectory – length of antenna on plot

  • kw – arguments passed to matplotlib plot statements

Returns

axes handler

plot_lh_antennas_CX_topview(time_index=None, time=None, ax=None, antenna_trajectory=None, **kw)

Plot LH antenna in toroidal cross-section

Parameters
  • ods – input ods

  • time_index – int, list of ints, or None time slice to plot. If None, all time slices are plotted.

  • time – float, list of floats, or None time to plot. If None, all time slices are plotted. If not None, it takes precedence over time_index.

  • ax – axes to plot in (active axes is generated if ax is None)

  • kw – arguments passed to matplotlib plot statements

  • antenna_trajectory – length of antenna on plot

Returns

axes handler

plot_magnetics_bpol_probe_data(equilibrium_constraints=True, ax=None, **kw)

plot bpol_probe time traces and equilibrium constraints

Parameters
  • equilibrium_constraints – plot equilibrium constraints if present

  • ax – Axes instance [optional] axes to plot in (active axes is generated if ax is None)

  • **kw – Additional keywords for plot

Returns

axes instance

plot_magnetics_diamagnetic_flux_data(equilibrium_constraints=True, ax=None, **kw)

plot diamagnetic_flux time trace and equilibrium constraint

Parameters
  • equilibrium_constraints – plot equilibrium constraints if present

  • ax – Axes instance [optional] axes to plot in (active axes is generated if ax is None)

  • **kw – Additional keywords for plot

Returns

axes instance

plot_magnetics_flux_loop_data(equilibrium_constraints=True, ax=None, **kw)

plot flux_loop time traces and equilibrium constraints

Parameters
  • equilibrium_constraints – plot equilibrium constraints if present

  • ax – Axes instance [optional] axes to plot in (active axes is generated if ax is None)

  • **kw – Additional keywords for plot

Returns

axes instance

plot_magnetics_ip_data(equilibrium_constraints=True, ax=None, **kw)

plot ip time trace and equilibrium constraint

Parameters
  • equilibrium_constraints – plot equilibrium constraints if present

  • ax – Axes instance [optional] axes to plot in (active axes is generated if ax is None)

  • **kw – Additional keywords for plot

Returns

axes instance

plot_magnetics_overlay(ax=None, show_flux_loop=True, show_bpol_probe=True, show_btor_probe=True, flux_loop_style={'marker': 's'}, pol_probe_style={}, tor_probe_style={'marker': '.'}, **kw)

Plot magnetics on a tokamak cross section plot

Parameters
  • ods – OMAS ODS instance

  • flux_loop_style – dictionary with matplotlib options to render flux loops

  • pol_probe_style – dictionary with matplotlib options to render poloidal magnetic probes

  • tor_probe_style – dictionary with matplotlib options to render toroidal magnetic probes

  • ax – axes to plot in (active axes is generated if ax is None)

Returns

axes handler

plot_nbi_summary(ax=None)

Plot summary of NBI power time traces

Parameters
  • ods – input ods

  • ax – axes to plot in (active axes is generated if ax is None)

Returns

axes handler

plot_overlay(ax=None, allow_autoscale=True, debug_all_plots=False, return_overlay_list=False, **kw)

Plots overlays of hardware/diagnostic locations on a tokamak cross section plot

Parameters
  • ods – OMAS ODS instance

  • ax – axes instance into which to plot (default: gca())

  • allow_autoscale – bool Certain overlays will be allowed to unlock xlim and ylim, assuming that they have been locked by equilibrium_CX. If this option is disabled, then hardware systems like PF-coils will be off the plot and mostly invisible.

  • debug_all_plots – bool Individual hardware systems are on by default instead of off by default.

  • return_overlay_list – Return list of possible overlays that could be plotted

  • **kw

    additional keywords for selecting plots.

    • Select plots by setting their names to True; e.g.: if you want the gas_injection plot, set gas_injection=True as a keyword. If debug_all_plots is True, then you can turn off individual plots by, for example, setting gas_injection=False.

    • Instead of True to simply turn on an overlay, you can pass a dict of keywords to pass to a particular overlay method, as in thomson={‘labelevery’: 5}. After an overlay pops off its keywords, remaining keywords are passed to plot, so you can set linestyle, color, etc.

    • Overlay functions accept these standard keywords:
      • mask: bool array

        Set of flags for switching plot elements on/off. Must be equal to the number of channels or items to be plotted.

      • labelevery: int

        Sets how often to add labels to the plot. A setting of 0 disables labels, 1 labels every element, 2 labels every other element, 3 labels every third element, etc.

      • notesize: matplotlib font size specification

        Applies to annotations drawn on the plot. Examples: ‘xx-small’, ‘medium’, 16

      • label_ha: None or string or list of (None or string) instances

        Descriptions of how labels should be aligned horizontally. Either provide a single specification or a list of specs matching or exceeding the number of labels expected. Each spec should be: ‘right’, ‘left’, or ‘center’. None (either as a scalar or an item in the list) will give default alignment for the affected item(s).

      • label_va: None or string or list of (None or string) instances

        Descriptions of how labels should be aligned vertically. Either provide a single specification or a list of specs matching or exceeding the number of labels expected. Each spec should be: ‘top’, ‘bottom’, ‘center’, ‘baseline’, or ‘center_baseline’. None (either as a scalar or an item in the list) will give default alignment for the affected item(s).

      • label_r_shift: float or float array/list.

        Add an offset to the R coordinates of all text labels for the current hardware system. (in data units, which would normally be m) Scalar: add the same offset to all labels. Iterable: Each label can have its own offset.

        If the list/array of offsets is too short, it will be padded with 0s.

      • label_z_shift: float or float array/list

        Add an offset to the Z coordinates of all text labels for the current hardware system (in data units, which would normally be m) Scalar: add the same offset to all labels. Iterable: Each label can have its own offset.

        If the list/array of offsets is too short, it will be padded with 0s.

      • Additional keywords are passed to the function that does the drawing; usually matplotlib.axes.Axes.plot().

Returns

axes handler
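
Two of the standard overlay keywords lend themselves to small standalone sketches (hypothetical helpers, not the omas source): labelevery selects which channels get text labels, and label_r_shift/label_z_shift offsets are padded with 0s when the list is shorter than the number of labels.

```python
def label_indices(n_channels, labelevery):
    """Indices that receive a text label: 0 disables labels,
    1 labels every element, 2 every other element, etc."""
    if not labelevery:
        return []
    return list(range(0, n_channels, labelevery))

def pad_shifts(shifts, n_labels):
    """Per-label coordinate offsets; short lists are padded with 0s,
    a scalar is broadcast to every label."""
    try:
        shifts = list(shifts)          # iterable: one offset per label
    except TypeError:
        shifts = [shifts] * n_labels   # scalar: same offset everywhere
    return shifts + [0] * (n_labels - len(shifts))

print(label_indices(7, 3))   # [0, 3, 6]
print(pad_shifts([0.1], 3))  # [0.1, 0, 0]
```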

plot_pellets_trajectory_CX(time_index=None, time=None, ax=None, **kw)

Plot pellets trajectory in poloidal cross-section

Parameters
  • ods – input ods

  • time_index – int, list of ints, or None time slice to plot. If None, all time slices are plotted.

  • time – float, list of floats, or None time to plot. If None, all time slices are plotted. If not None, it takes precedence over time_index.

  • ax – axes to plot in (active axes is generated if ax is None)

  • kw – arguments passed to matplotlib plot statements

Returns

axes handler

plot_pellets_trajectory_CX_topview(time_index=None, time=None, ax=None, **kw)

Plot pellet trajectory in toroidal cross-section

Parameters
  • ods – input ods

  • time_index – int, list of ints, or None time slice to plot. If None, all time slices are plotted.

  • time – float, list of floats, or None time to plot. If None, all time slices are plotted. If not None, it takes precedence over time_index.

  • ax – axes to plot in (active axes is generated if ax is None)

  • kw – arguments passed to matplotlib plot statements

Returns

axes handler

plot_pf_active_data(equilibrium_constraints=True, ax=None, **kw)

plot pf_active time traces

Parameters
  • equilibrium_constraints – plot equilibrium constraints if present

  • ax – Axes instance [optional] axes to plot in (active axes is generated if ax is None)

  • **kw – Additional keywords for plot

Returns

axes instance

plot_pf_active_overlay(ax=None, **kw)

Plots overlays of active PF coils. INCOMPLETE: only the oblique geometry definition is treated so far. More should be added later.

Parameters
  • ods – OMAS ODS instance

  • ax – axes instance into which to plot (default: gca())

  • **kw

    Additional keywords scalex, scaley: passed to ax.autoscale_view() call at the end

    • Accepts standard omas_plot overlay keywords listed in overlay() documentation: mask, labelevery, …

    • Remaining keywords are passed to matplotlib.patches.Polygon call

      Hint: you may want to set facecolor instead of just color

plot_position_control_overlay(ax=None, t=None, xpoint_marker='x', strike_marker='s', labels=None, measured_xpoint_marker='+', show_measured_xpoint=False, **kw)

Overlays position_control data

Parameters
  • ods – ODS instance Must contain pulse_schedule with position control data

  • ax – Axes instance

  • t – float Time to display in seconds. If not specified, defaults to the average time of position control samples.

  • xpoint_marker – string Matplotlib marker spec for X-point target(s)

  • strike_marker – string Matplotlib marker spec for strike point target(s)

  • labels – list of strings [optional] Override default point labels. Length must be long enough to cover all points.

  • show_measured_xpoint – bool In addition to the target X-point, mark the measured X-point coordinates.

  • measured_xpoint_marker – string Matplotlib marker spec for X-point measurement(s)

  • **kw

    Additional keywords.

    • Accepts standard omas_plot overlay keywords listed in overlay() documentation: mask, labelevery, …

    • Others will be passed to the plot() call for drawing shape control targets

plot_pulse_schedule_overlay(ax=None, t=None, **kw)

Overlays relevant data from pulse_schedule, such as position control

Parameters
  • ods – ODS instance Must contain pulse_schedule data

  • ax – Axes instance

  • t – float Time in s

  • **kw

    Additional keywords.

    • Accepts standard omas_plot overlay keywords listed in overlay() documentation: mask, labelevery, …

    • Others will be passed to the plot() calls.

plot_quantity(key, yname=None, xname=None, yunits=None, xunits=None, ylabel=None, xlabel=None, label=None, xnorm=1.0, ynorm=1.0, ax=None, **kw)

Provides convenient way to plot 1D quantities in ODS

For example:
>>> ods.plot_quantity('@core.*elec.*dens', '$n_e$', lw=2)
>>> ods.plot_quantity('@core.*ion.0.*dens.*th', '$n_D$', lw=2)
>>> ods.plot_quantity('@core.*ion.1.*dens.*th', '$n_C$', lw=2)
Parameters
  • ods – ODS instance

  • key – ODS location or search pattern

  • yname – name of the y quantity

  • xname – name of the x quantity

  • yunits – units of the y quantity

  • xunits – units of the x quantity

  • ylabel – plot ylabel

  • xlabel – plot xlabel

  • ynorm – normalization factor for y

  • xnorm – normalization factor for x

  • label – label for the legend

  • ax – axes instance into which to plot (default: gca())

  • **kw – extra arguments are passed to the plot function

Returns

axes instance
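
The @-prefixed search patterns shown in the plot_quantity examples behave like regular-expression matches against flattened ODS paths. A rough standalone illustration follows; the helper and the example paths are hypothetical, and the real omas matcher differs in detail:

```python
import re

def search_paths(paths, pattern):
    """Match an '@'-style search pattern against flattened data paths."""
    regex = re.compile(pattern.lstrip('@'))
    return [p for p in paths if regex.search(p)]

paths = ['core_profiles.profiles_1d.0.electrons.density_thermal',
         'core_profiles.profiles_1d.0.ion.0.density_thermal']
print(search_paths(paths, '@core.*elec.*dens'))
# matches only the electron density path
```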

plot_summary(fig=None, quantity=None, **kw)

Plot summary time traces. Internally makes use of plot_quantity method.

Parameters
  • ods – input ods

  • fig – figure to plot in (a new figure is generated if fig is None)

  • quantity – if None plot all time-dependent global_quantities. Else a list of strings with global quantities to plot

Returns

list of axes

plot_tf_b_field_tor_vacuum_r_data(equilibrium_constraints=True, ax=None, **kw)

plot b_field_tor_vacuum_r time trace and equilibrium constraint

Parameters
  • equilibrium_constraints – plot equilibrium constraints if present

  • ax – Axes instance [optional] axes to plot in (active axes is generated if ax is None)

  • **kw – Additional keywords for plot

Returns

axes instance

plot_thomson_scattering_overlay(ax=None, **kw)

Overlays Thomson channel locations

Parameters
  • ods – OMAS ODS instance

  • ax – axes instance into which to plot (default: gca())

  • **kw

    Additional keywords for Thomson plot:

    • Accepts standard omas_plot overlay keywords listed in overlay() documentation: mask, labelevery, …

    • Remaining keywords are passed to plot call

plot_wall_overlay(ax=None, component_index=None, types=['limiter', 'mobile', 'vessel'], unit_index=None, **kw)

Plot walls on a tokamak cross section plot

Parameters
  • ods – OMAS ODS instance

  • ax – axes instance into which to plot (default: gca())

  • component_index – list of index of components to plot

  • types – list with one or more of [‘limiter’,’mobile’,’vessel’]

  • unit_index – list of index of units of the component to plot

Returns

axes handler

plot_waves_beam_CX(time_index=None, time=None, ax=None, **kw)

Plot waves beams in poloidal cross-section

Parameters
  • ods – input ods

  • time_index – int, list of ints, or None time slice to plot. If None, all time slices are plotted.

  • time – float, list of floats, or None time to plot. If None, all time slices are plotted. If not None, it takes precedence over time_index.

  • ax – axes to plot in (active axes is generated if ax is None)

  • kw – arguments passed to matplotlib plot statements

Returns

axes handler

plot_waves_beam_profile(time_index=None, time=None, what='power_density', ax=None, **kw)

Plot 1d profiles of waves beams given quantity

Parameters
  • ods – input ods

  • time_index – int, list of ints, or None time slice to plot. If None, all time slices are plotted.

  • time – float, list of floats, or None time to plot. If None, all time slices are plotted. If not None, it takes precedence over time_index.

  • what – quantity to plot (default: ‘power_density’)

  • ax – axes to plot in (active axes is generated if ax is None)

  • kw – arguments passed to matplotlib plot statements

Returns

axes handler

plot_waves_beam_summary(time_index=None, time=None, fig=None, **kw)

Plot waves beam summary: CX, power_density, and current_parallel_density

Parameters
  • ods – input ods

  • time_index – int, list of ints, or None time slice to plot. If None, all time slices are plotted.

  • time – float, list of floats, or None time to plot. If None, all time slices are plotted. If not None, it takes precedence over time_index.

  • fig – figure to plot in (a new figure is generated if fig is None)

  • kw – arguments passed to matplotlib plot statements

Returns

figure handler

sample_bolometer(nc=10)

Adds fake bolometer chord locations. This method operates in place.

Parameters
  • ods – ODS instance

  • nc – Number of fake channels to make up for testing

Returns

ODS instance with fake bolometer hardware information added

sample_charge_exchange(nc=10)

Adds fake CER channel locations. This method operates in place.

Parameters
  • ods – ODS instance

  • nc – Number of channels to add

Returns

ODS instance with fake CER hardware information added

sample_core_profiles(time_index=0, add_junk_ion=False, include_pressure=True)

Add sample core_profiles data. This method operates in place.

Parameters
  • ods – ODS instance

  • time_index – int

  • add_junk_ion – bool Flag for adding a junk ion for testing how well functions tolerate problems. This will be missing labels, etc.

  • include_pressure – bool Include pressure profiles when temperature and density are added

Returns

ODS instance with profiles added

sample_core_sources(time_index=0)

Add sample core_sources data. This method operates in place.

Parameters
  • ods – ODS instance

  • time_index – int

Returns

ODS instance with sources added

sample_core_transport(time_index=0)

Add sample core_transport data. This method operates in place.

Parameters
  • ods – ODS instance

  • time_index – int

Returns

ODS instance with transport added

sample_dataset_description()
sample_ec_launchers(ngyros=2, ntimes=6)

Adds fake ec_launchers data to support testing. This method operates in place.

Parameters
  • ods – ODS instance

  • ngyros – number of gyrotrons

  • ntimes – number of times

Returns

ODS instance with added ec_launchers

sample_equilibrium(time_index=0, include_profiles=True, include_phi=True, include_psi=True, include_wall=True, include_q=True, include_xpoint=False)

Add sample equilibrium data. This method operates in place.

Parameters
  • ods – ODS instance

  • time_index – int Under which time index should fake equilibrium data be loaded?

  • include_profiles – bool Include 1D profiles of pressure, q, p’, FF’ They are in the sample set, so not including them means deleting them.

  • include_phi – bool Include 1D and 2D profiles of phi (toroidal flux, for calculating rho) This is in the sample set, so not including it means deleting it.

  • include_psi – bool Include 1D and 2D profiles of psi (poloidal flux) This is in the sample set, so not including it means deleting it.

  • include_wall – bool Include the first wall This is in the sample set, so not including it means deleting it.

  • include_q – bool Include safety factor This is in the sample set, so not including it means deleting it.

  • include_xpoint – bool Include X-point R-Z coordinates This is not in the sample set, so including it means making it up

Returns

ODS instance with equilibrium data added


sample_gas_injection()

Adds fake gas injection locations. This method operates in place.

Parameters

ods – ODS instance

Returns

ODS instance with fake gas injection hardware information added

sample_ic_antennas()

Add sample ic_antennas data. This method operates in place.

Parameters

ods – ODS instance

Returns

ODS instance with profiles added

sample_interferometer()

Adds fake interferometer locations. This method operates in place.

Parameters

ods – ODS instance

Returns

ODS instance with fake interferometer hardware information added

sample_langmuir_probes()

Adds fake Langmuir probe locations. This method operates in place.

Parameters

ods – ODS instance

Returns

ODS instance with fake Langmuir probe hardware information added

sample_magnetics()

Adds fake magnetic probe locations. This method operates in place.

Parameters

ods – ODS instance

Returns

ODS instance with fake magnetics hardware information added

sample_nbi(nunits=2, ntimes=6)

Adds fake nbi data to support testing. This method operates in place.

Parameters
  • ods – ODS instance

  • nunits – number of NBI units

  • ntimes – number of times

Returns

ODS instance with added nbi

sample_pf_active(nc_weird=0, nc_undefined=0)

Adds fake active PF coil locations. This method operates in place.

Parameters
  • ods – ODS instance

  • nc_weird – int Number of coils with badly defined geometry to include for testing plot overlay robustness

  • nc_undefined – int Number of coils with undefined geometry_type (But valid r, z outlines) to include for testing plot overlay robustness.

Returns

ODS instance with PF active hardware information added

sample_pulse_schedule()

Adds fake control target data to support testing. This method operates in place.

Parameters

ods – ODS instance

Returns

ODS instance with added pulse schedule

sample_summary()

Add sample summary data. This method operates in place.

Parameters

ods – ODS instance

Returns

ODS instance with summary data added

sample_thomson_scattering(nc=10)

Adds fake Thomson scattering channel locations. This method operates in place.

Parameters
  • ods – ODS instance

  • nc – Number of channels to add.

Returns

ODS instance with fake Thomson hardware information added

sample_wall()

Adds fake wall data. This method operates in place.

Parameters

ods – ODS instance

Returns

ODS instance with added wall description

class omfit_classes.omfit_omas.ODC(*args, **kw)[source]

Bases: omas.omas_core.ODS

OMAS Data Collection class

Parameters
  • imas_version – IMAS version to use as a constraint for the node names

  • consistency_check – whether to enforce consistency with IMAS schema

  • cocos – internal COCOS representation (this can only be set when the object is created)

  • cocosio – COCOS representation of the data that is read/written from/to the ODS

  • coordsio – ODS with coordinates to use for the data that is read/written from/to the ODS

  • unitsio – ODS will return data with units if True

  • uncertainio – ODS will return data with uncertainties if True

  • dynamic – internal keyword used for dynamic data loading

property consistency_check

property that returns whether consistency with IMAS schema is enabled or not

Returns

True/False/’warn’/’drop’/’strict’ or a combination of those strings

same_init_ods(cls=None)[source]

Initializes a new ODS with the same attributes as this one

Returns

new ODS

keys(dynamic=True)[source]

Return list of keys

Parameters

dynamic – whether dynamically loaded keys should be shown. This is True by default because this should be the case for user-facing calls. Within the inner workings of OMAS we thus need to be careful and keep track of when this should not be the case. Throughout the library we use dynamic=1 or dynamic=0 for debugging purposes, since one can place a conditional breakpoint in this function (checking if dynamic is True and self.dynamic) to verify that the dynamic=True calls indeed come from the user and not from within the library itself.

Returns

list of keys

save(*args, **kw)[source]

Save OMAS data

Parameters
  • filename – filename.XXX where the extension is used to select the save format method (e.g. 'pkl', 'nc', 'h5', 'ds', 'json', 'ids'); set to imas, s3, hdc, or mongo for save methods that do not have a filename with extension

  • *args – extra arguments passed to save_omas_XXX() method

  • **kw – extra keywords passed to save_omas_XXX() method

Returns

return from save_omas_XXX() method

load(*args, **kw)[source]

Load OMAS data

Parameters
  • filename – filename.XXX where the extension is used to select the load format method (e.g. 'pkl', 'nc', 'h5', 'ds', 'json', 'ids'); set to imas, s3, hdc, or mongo for load methods that do not have a filename with extension

  • consistency_check – perform consistency check once the data is loaded

  • *args – extra arguments passed to load_omas_XXX() method

  • **kw – extra keywords passed to load_omas_XXX() method

Returns

ODS with loaded data

class omfit_classes.omfit_omas.ODX(DS=None)[source]

Bases: collections.abc.MutableMapping

OMAS data xarray class

save(*args, **kw)[source]
load(*args, **kw)[source]
to_ods(consistency_check=True)[source]

Generate a ODS from current ODX

Parameters

consistency_check – use consistency_check flag in ODS

Returns

ODS

class omfit_classes.omfit_omas.CodeParameters(string=None)[source]

Bases: dict

Class used to interface with IMAS code-parameters XML files

from_string(code_params_string)[source]

Load data from code.parameters XML string

Parameters

code_params_string – XML string

Returns

self

from_file(code_params_file)[source]

Load data from code.parameters XML file

Parameters

code_params_file – XML file

Returns

self

to_string()[source]

generate an XML string from this dictionary

Returns

XML string
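
The mapping from a nested parameters dictionary to an XML string can be sketched with the standard library (the helper name and sample keys below are illustrative, not part of the CodeParameters API):

```python
import xml.etree.ElementTree as ET

def dict_to_xml(tag, d):
    """Recursively serialize a nested dictionary into an XML element."""
    elem = ET.Element(tag)
    for key, value in d.items():
        if isinstance(value, dict):
            elem.append(dict_to_xml(key, value))
        else:
            # leaf values are stored as element text
            ET.SubElement(elem, key).text = str(value)
    return elem

params = {'solver': {'tolerance': '1e-6', 'max_iterations': '100'}}
xml_string = ET.tostring(dict_to_xml('parameters', params), encoding='unicode')
```

from_string() performs the inverse mapping, parsing such an XML string back into a nested dictionary.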

getraw(key)[source]

Method to access data in CodeParameters with no processing of the key. Effectively behaves like a pure Python dictionary/list __getitem__. This method is mostly meant to be used in the inner workings of the CodeParameters class.

Parameters

key – string or integer

Returns

value

setraw(key, value)[source]

Method to assign data to CodeParameters with no processing of the key. Effectively behaves like a pure Python dictionary/list __setitem__. This method is mostly meant to be used in the inner workings of the CodeParameters class.

Parameters
  • key – string or integer

  • value – value to assign

Returns

value

update(value)[source]

Update CodeParameters. NOTE: ODSs will be converted to CodeParameters classes

Parameters

value – dictionary structure

Returns

self

paths(**kw)[source]

Traverse the code parameters and return paths that have data

Returns

list of paths that have data

keys()[source]
Returns

keys as list

values()[source]
Returns

values as list

items()[source]
Returns

key-value pairs as list

flat(**kw)[source]

Flat dictionary representation of the data

Parameters

**kw – extra keywords passed to the paths() method

Returns

OrderedDict with flat representation of the data
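
The flattening itself follows the usual nested-dictionary pattern; a minimal sketch (the function name and dot separator are illustrative, not the CodeParameters implementation):

```python
def flatten(d, prefix=''):
    """Flatten a nested dictionary into {'a.b.c': value} form."""
    out = {}
    for key, value in d.items():
        path = f'{prefix}.{key}' if prefix else key
        if isinstance(value, dict):
            out.update(flatten(value, path))  # recurse into sub-dictionaries
        else:
            out[path] = value
    return out

flat_params = flatten({'solver': {'tolerance': 1e-6}, 'verbose': True})
```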

from_structure(structure, depth=0)[source]

Generate CodeParameters starting from a hierarchical structure made of dictionaries and lists

Parameters

structure – input structure

Returns

self

omfit_classes.omfit_omas.codeparams_xml_save(f)[source]

Decorator function to be used around the omas_save_XXX methods to enable saving of code.parameters as an XML string

omfit_classes.omfit_omas.codeparams_xml_load(f)[source]

Decorator function to be used around the omas_load_XXX methods to enable loading of code.parameters from an XML string

omfit_classes.omfit_omas.ods_sample(ntimes=1)[source]

Returns an ODS populated with all of the samples

Parameters

ntimes – number of time slices to generate

Returns

sample ods

omfit_classes.omfit_omas.different_ods(ods1, ods2, ignore_type=False, ignore_empty=False, ignore_keys=[], ignore_default_keys=True, rtol=1e-05, atol=1e-08)[source]

Checks if two ODSs have any differences and returns a string describing the cause of the difference

Parameters
  • ods1 – first ods to check

  • ods2 – second ods to check

  • ignore_type – ignore object type differences

  • ignore_empty – ignore empty nodes

  • ignore_keys – ignore the following keys

  • ignore_default_keys – ignores the following keys from the comparison: dataset_description.data_entry.user, dataset_description.data_entry.run, dataset_description.data_entry.machine, dataset_description.ids_properties, dataset_description.imas_version, dataset_description.time, ids_properties.homogeneous_time, ids_properties.occurrence, ids_properties.version_put.data_dictionary, ids_properties.version_put.access_layer, ids_properties.version_put.access_layer_language

  • rtol – relative tolerance parameter

  • atol – absolute tolerance parameter

Returns

string with reason for difference, or False otherwise
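
The core of such a comparison is a recursive walk with numeric tolerances; a simplified sketch on plain dictionaries (different_ods itself additionally handles types, empty nodes, and the ignore lists documented above):

```python
import math

def different_dicts(d1, d2, rtol=1e-05, atol=1e-08, path=''):
    """Return a string describing the first difference found, or False if none."""
    if set(d1) != set(d2):
        return f'{path}: keys differ'
    for key in d1:
        here = f'{path}.{key}' if path else str(key)
        v1, v2 = d1[key], d2[key]
        if isinstance(v1, dict) and isinstance(v2, dict):
            diff = different_dicts(v1, v2, rtol, atol, here)
            if diff:
                return diff
        elif isinstance(v1, float) and isinstance(v2, float):
            # numeric leaves are compared with relative/absolute tolerances
            if not math.isclose(v1, v2, rel_tol=rtol, abs_tol=atol):
                return f'{here}: values differ'
        elif v1 != v2:
            return f'{here}: values differ'
    return False
```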

omfit_classes.omfit_omas.save_omas_pkl(ods, filename, **kw)[source]

Save ODS to Python pickle

Parameters
  • ods – OMAS data set

  • filename – filename to save to

  • kw – keywords passed to pickle.dump function

omfit_classes.omfit_omas.load_omas_pkl(filename, consistency_check=None, imas_version=None)[source]

Load ODS or ODC from Python pickle

Parameters
  • filename – filename to save to

  • consistency_check – verify that data is consistent with IMAS schema (skip if None)

  • imas_version – imas version to use for consistency check (leave original if None)

Returns

ods OMAS data set

omfit_classes.omfit_omas.through_omas_pkl(ods)[source]

Test save and load Python pickle

Parameters

ods – ods

Returns

ods

omfit_classes.omfit_omas.save_omas_json(ods, filename, objects_encode=None, **kw)[source]

Save an ODS to Json

Parameters
  • ods – OMAS data set

  • filename – filename or file descriptor to save to

  • objects_encode – how to handle non-standard JSON objects:

    • True: encode numpy arrays, complex, and uncertain

    • None: numpy arrays as lists; encode complex and uncertain

    • False: numpy arrays as lists; fail on complex and uncertain

  • kw – arguments passed to the json.dumps method

omfit_classes.omfit_omas.load_omas_json(filename, consistency_check=True, imas_version='3.41.0', cls=<class 'omas.omas_core.ODS'>, **kw)[source]

Load ODS or ODC from Json

Parameters
  • filename – filename or file descriptor to load from

  • consistency_check – verify that data is consistent with IMAS schema

  • imas_version – imas version to use for consistency check

  • cls – class to use for loading the data

  • kw – arguments passed to the json.loads method

Returns

OMAS data set
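
The objects_encode option exists because complex and uncertain values are not native JSON types. The general tagged-object technique can be sketched with the standard library (the tag name below is illustrative, not the encoding OMAS actually uses):

```python
import json

def encode_complex(obj):
    """Encode complex numbers as a tagged dictionary."""
    if isinstance(obj, complex):
        return {'__complex__': True, 're': obj.real, 'im': obj.imag}
    raise TypeError(f'{type(obj)} is not JSON serializable')

def decode_complex(d):
    """Rebuild complex numbers from tagged dictionaries."""
    if d.get('__complex__'):
        return complex(d['re'], d['im'])
    return d

text = json.dumps({'z': 1 + 2j}, default=encode_complex)
data = json.loads(text, object_hook=decode_complex)  # data['z'] == (1+2j)
```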

omfit_classes.omfit_omas.through_omas_json(ods, method='class_method')[source]

Test save and load OMAS Json

Parameters

ods – ods

Returns

ods

omfit_classes.omfit_omas.save_omas_mongo(ods, collection, database='omas', server='mongodb+srv://{user}:{pass}@omasdb.xymmt.mongodb.net')[source]

Save an ODS to MongoDB

Parameters
  • ods – OMAS data set

  • collection – collection name in the database

  • database – database name on the server

  • server – server name

Returns

unique _id identifier of the record

omfit_classes.omfit_omas.load_omas_mongo(find, collection, database='omas', server='mongodb+srv://{user}:{pass}@omasdb.xymmt.mongodb.net', consistency_check=True, imas_version='3.41.0', limit=None)[source]

Load an ODS from MongoDB

Parameters
  • find – dictionary to find data in the database

  • collection – collection name in the database

  • database – database name on the server

  • server – server name

  • consistency_check – verify that data is consistent with IMAS schema

  • imas_version – imas version to use for consistency check

  • limit – return at most limit number of results

Returns

list of OMAS data set that match find criterion

omfit_classes.omfit_omas.through_omas_mongo(ods, method='class_method')[source]

Test save and load OMAS MongoDB

Parameters

ods – ods

Returns

ods

omfit_classes.omfit_omas.save_omas_hdc(ods)[source]

Convert OMAS data structure to HDC

Parameters

ods – input data structure

Returns

HDC container

omfit_classes.omfit_omas.load_omas_hdc(hdc, consistency_check=True)[source]

Convert HDC data structure to OMAS

Parameters
  • hdc – input data structure

  • consistency_check – verify that data is consistent with IMAS schema

Returns

populated ODS

omfit_classes.omfit_omas.through_omas_hdc(ods, method='class_method')[source]

Test save and load HDC

Parameters

ods – ods

Returns

ods

omfit_classes.omfit_omas.save_omas_nc(ods, filename, **kw)[source]

Save an ODS to NetCDF file

Parameters
  • ods – OMAS data set

  • filename – filename to save to

  • kw – arguments passed to the netCDF4 Dataset function

omfit_classes.omfit_omas.load_omas_nc(filename, consistency_check=True, imas_version='3.41.0', cls=<class 'omas.omas_core.ODS'>)[source]

Load ODS or ODC from NetCDF file

Parameters
  • filename – filename to load from

  • consistency_check – verify that data is consistent with IMAS schema

  • imas_version – imas version to use for consistency check

  • cls – class to use for loading the data

Returns

OMAS data set

omfit_classes.omfit_omas.through_omas_nc(ods, method='class_method')[source]

Test save and load NetCDF

Parameters

ods – ods

Returns

ods

omfit_classes.omfit_omas.save_omas_h5(ods, filename)[source]

Save an ODS to HDF5

Parameters
  • ods – OMAS data set

  • filename – filename or file descriptor to save to

omfit_classes.omfit_omas.load_omas_h5(filename, consistency_check=True, imas_version='3.41.0', cls=<class 'omas.omas_core.ODS'>)[source]

Load ODS or ODC from HDF5

Parameters
  • filename – filename or file descriptor to load from

  • consistency_check – verify that data is consistent with IMAS schema

  • imas_version – imas version to use for consistency check

  • cls – class to use for loading the data

Returns

OMAS data set

omfit_classes.omfit_omas.through_omas_h5(ods, method='class_method')[source]

Test save and load OMAS HDF5

Parameters

ods – ods

Returns

ods

omfit_classes.omfit_omas.save_omas_ascii(ods, filename, machine=None, pulse=None, run=None, dir=None)[source]

Save an ODS to ASCII (follows IMAS ASCII_BACKEND convention)

Parameters
  • ods – OMAS data set

  • filename – filename or file descriptor to save to; use None to save IDSs to multiple files based on machine, pulse, run

  • machine – machine name to build filename for saving IDSs to multiple files

  • pulse – pulse number to build filename for saving IDSs to multiple files

  • run – run number to build filename for saving IDSs to multiple files

  • dir – directory where to save multiple IDSs files

omfit_classes.omfit_omas.load_omas_ascii(filename, machine=None, pulse=None, run=None, dir=None, consistency_check=True, imas_version='3.41.0')[source]

Load an ODS from ASCII (follows IMAS ASCII_BACKEND convention)

Parameters
  • filename – filename or file descriptor to load from; use None to load IDSs from multiple files based on machine, pulse, run

  • machine – machine name to build filename for loading IDSs from multiple files

  • pulse – pulse number to build filename for loading IDSs from multiple files

  • run – run number to build filename for loading IDSs from multiple files

  • dir – directory from where to load multiple IDSs files

  • consistency_check – verify that data is consistent with IMAS schema

  • imas_version – imas version to use for consistency check

Returns

OMAS data set

omfit_classes.omfit_omas.through_omas_ascii(ods, method='class_method', one_or_many_files='many')[source]

Test save and load OMAS ASCII

Parameters

ods – ods

Returns

ods

omfit_classes.omfit_omas.save_omas_ds(ods, filename)[source]

Save an ODS to xarray dataset

Parameters
  • ods – OMAS data set

  • filename – filename or file descriptor to save to

omfit_classes.omfit_omas.load_omas_ds(filename, consistency_check=True)[source]

Load ODS from xarray dataset

Parameters
  • filename – filename or file descriptor to load from

  • consistency_check – verify that data is consistent with IMAS schema

Returns

OMAS data set

omfit_classes.omfit_omas.through_omas_ds(ods, method='class_method')[source]

Test save and load ODS via xarray file format

Parameters

ods – OMAS data set

Returns

OMAS data set

omfit_classes.omfit_omas.load_omas_dx(filename, consistency_check=True)[source]

Load ODX from xarray dataset

Parameters
  • filename – filename or file descriptor to load from

  • consistency_check – verify that data is consistent with IMAS schema

Returns

OMAS data xarray

omfit_classes.omfit_omas.save_omas_dx(odx, filename)[source]

Save an ODX to xarray dataset

Parameters
  • odx – OMAS data xarray

  • filename – filename or file descriptor to save to

omfit_classes.omfit_omas.through_omas_dx(odx, method='class_method')[source]

Test save and load OMAS data xarray via xarray file format

Parameters

ods – OMAS data xarray

Returns

OMAS data xarray

omfit_classes.omfit_omas.ods_2_odx(ods, homogeneous=None)[source]

Map ODS to an ODX

Parameters
  • ods – OMAS data set

  • homogeneous –

    • False: flat representation of the ODS (data is not collected across arrays of structures)

    • 'time': collect arrays of structures only along the time dimension (always valid for homogeneous_time=True)

    • 'full': collect arrays of structures along all dimensions (may be valid in many situations, especially for simulation data with homogeneous_time=True where, for example, the number of ions, sources, etc. does not vary)

    • None: smart setting; uses homogeneous='time' if homogeneous_time=True, else False

Returns

OMAS data xarray

omfit_classes.omfit_omas.odx_2_ods(odx, consistency_check=True)[source]

Map ODX to ODS

Parameters
  • odx – OMAS data xarray

  • consistency_check – verify that data is consistent with IMAS schema

Returns

OMAS data set

omfit_classes.omfit_omas.save_omas_imas(ods, *args, **kwargs)
omfit_classes.omfit_omas.load_omas_imas(user='fusionbot', machine=None, pulse=None, run=0, occurrence={}, paths=None, time=None, imas_version=None, skip_uncertainties=False, consistency_check=True, verbose=True, backend='MDSPLUS')[source]

Load OMAS data from IMAS

Parameters
  • user – IMAS username

  • machine – IMAS machine

  • pulse – IMAS pulse

  • run – IMAS run

  • occurrence – dictionary with the occurrence to load for each IDS

  • paths – list of paths to load from IMAS

  • time – time slice [expressed in seconds] to be extracted

  • imas_version – IMAS version (force specific version)

  • skip_uncertainties – do not load uncertain data

  • consistency_check – perform consistency_check

  • verbose – print loading progress

  • backend – Which backend to use, can be one of MDSPLUS, ASCII, HDF5, MEMORY, UDA, NO

Returns

OMAS data set

omfit_classes.omfit_omas.through_omas_imas(ods, method='class_method')[source]

Test save and load OMAS IMAS

Parameters

ods – ods

Returns

ods

omfit_classes.omfit_omas.load_omas_iter_scenario(pulse, run=0, paths=None, imas_version='3.41.0', verbose=True)[source]

Load ODS from ITER IMAS scenario database

Parameters
  • pulse – IMAS pulse

  • run – IMAS run

  • paths – list of paths to load from IMAS

  • imas_version – IMAS version

  • verbose – print loading progress

Returns

OMAS data set

omfit_classes.omfit_omas.browse_imas(user='fusionbot', pretty=True, quiet=False, user_imasdbdir='/home/fusionbot/public/imasdb')[source]

Browse available IMAS data (machine/pulse/run) for given user

Parameters
  • user – user (or list of users) to browse. Browses all users if None.

  • pretty – express size in MB and time in human readable format

  • quiet – if True, do not print the database to screen

  • user_imasdbdir – directory where imasdb is located for current user (typically $HOME/public/imasdb/)

Returns

hierarchical dictionary with database of available IMAS data (machine/pulse/run) for given user

omfit_classes.omfit_omas.save_omas_s3(ods, filename, user='fusionbot', tmp_dir='/tmp/fusionbot/OMAS_TMP_DIR', **kw)[source]

Save an OMAS object to pickle and upload it to S3

Parameters
  • ods – OMAS data set

  • filename – filename to save to

  • user – username where to look for the file

  • tmp_dir – temporary folder for storing S3 file on local workstation

  • kw – arguments passed to the save_omas_pkl function

omfit_classes.omfit_omas.load_omas_s3(filename, user='fusionbot', consistency_check=None, imas_version=None, tmp_dir='/tmp/fusionbot/OMAS_TMP_DIR')[source]

Download an OMAS object from S3 and read it as pickle

Parameters
  • filename – filename to load from

  • user – username where to look for the file

  • consistency_check – verify that data is consistent with IMAS schema (skip if None)

  • imas_version – imas version to use for consistency check (leave original if None)

  • tmp_dir – temporary folder for storing S3 file on local workstation

Returns

OMAS data set

omfit_classes.omfit_omas.through_omas_s3(ods, method='class_method')[source]

Test save and load S3

Parameters

ods – ods

Returns

ods

omfit_classes.omfit_omas.list_omas_s3(user='')[source]

List S3 content

Parameters

user – username where to look for the file

Returns

OMAS data set

omfit_classes.omfit_omas.del_omas_s3(filename, user='fusionbot')[source]

Delete an OMAS object from S3

Parameters

user – username where to look for the file

Returns

OMAS data set

omfit_classes.omfit_omas.machines(machine=None, branch='')[source]

Function to get machines that have their mappings defined. This function takes care of remotely transferring the needed files (both .json and .py) if a remote branch is requested.

Parameters
  • machine – string with machine name or None

  • branch – GitHub branch from which to load the machine mapping information

Returns

if machine is None, returns a dictionary with the list of machines and their json mapping files; if machine is a string, returns the json mapping filename

omfit_classes.omfit_omas.load_omas_machine(machine, pulse, options={}, consistency_check=True, imas_version='3.41.0', cls=<class 'omas.omas_core.ODS'>, branch='', user_machine_mappings=None)[source]
omfit_classes.omfit_omas.machine_mapping_function(__regression_arguments__, **regression_args)[source]

Decorator used to identify machine mapping functions

Parameters

**regression_args – arguments used to run regression test

NOTE: use inspect.unwrap(function) to call a function decorated with @machine_mapping_function from another function decorated with @machine_mapping_function

omfit_classes.omfit_omas.test_machine_mapping_functions(machine, __all__, global_namespace, local_namespace)[source]

Function used to test python mapping functions

Parameters
  • __all__ – list of functions to test

  • global_namespace, local_namespace – testing namespaces

class omfit_classes.omfit_omas.mdstree(server, treename, pulse)[source]

Bases: dict

Class to handle the structure of an MDS+ tree. Nodes in this tree are mdsvalue objects

class omfit_classes.omfit_omas.mdsvalue(server, treename, pulse, TDI, old_MDS_server=False)[source]

Bases: dict

Execute MDS+ TDI functions

data()[source]
dim_of(dim)[source]
units()[source]
error()[source]
error_dim_of(dim)[source]
units_dim_of(dim)[source]
size(dim)[source]
raw(TDI=None)[source]

Fetch data from MDS+ with connection caching

Parameters

TDI – string, list or dict of strings MDS+ TDI expression(s) (overrides the one passed when the object was instantiated)

Returns

result of TDI expression, or dictionary with results of TDI expressions

omfit_classes.omfit_omas.omas_info(structures=None, hide_obsolescent=True, cumulative_queries=False, imas_version='3.41.0')[source]

This function returns an ODS with the leaf nodes filled with their property information

Parameters
  • hide_obsolescent – hide obsolescent entries

  • structures – list of IDS names, or string with a single IDS name, for which to retrieve the info; if None, all structures are returned

  • cumulative_queries – return all IDSs that have been queried

  • imas_version – IMAS version to look up

Returns

ods showcasing IDS structure

omfit_classes.omfit_omas.omas_info_node(key, imas_version='3.41.0')[source]

return information about a given node

Parameters
  • key – IMAS path

  • imas_version – IMAS version to look up

Returns

dictionary with IMAS information (or an empty dictionary if the node is not found)

omfit_classes.omfit_omas.get_actor_io_ids(filename)[source]

Parse IMAS Python actor script and return actor input and output IDSs

Parameters

filename – filename of the IMAS Python actor

Returns

tuple with list of input IDSs and output IDSs

omfit_classes.omfit_omas.rcparams_environment(**kw)[source]
omfit_classes.omfit_omas.omas_testdir(filename_topic='')[source]

Return path to the temporary folder where OMAS TEST files are saved/loaded

NOTE: if the directory does not exist, it is created

Returns

string with path to OMAS TEST folder

exception omfit_classes.omfit_omas.OmasDynamicException[source]

Bases: RuntimeError

Exception raised when dynamic data fetching fails

omfit_classes.omfit_omas.probe_endpoints(r0, z0, a0, l0, cocos)[source]

Transform r,z,a,l arrays commonly used to describe poloidal magnetic probes geometry to actual r,z coordinates of the end-points of the probes. This is useful for plotting purposes.

Parameters
  • r0 – r coordinates [m]

  • z0 – Z coordinates [m]

  • a0 – poloidal angles [radians]

  • l0 – length [m]

  • cocos – cocos convention

Returns

list of 2-points r and z coordinates of individual probes
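
Ignoring the COCOS sign conventions, the geometry for a single probe reduces to placing a segment of length l0 centered at (r0, z0) and tilted by angle a0; a hedged sketch under that assumption (the real function handles arrays of probes and the cocos argument):

```python
import math

def probe_segment(r0, z0, a0, l0):
    """End-points of a probe centered at (r0, z0), tilted by a0 [rad], of length l0 [m]."""
    dr = 0.5 * l0 * math.cos(a0)
    dz = 0.5 * l0 * math.sin(a0)
    return (r0 - dr, z0 - dz), (r0 + dr, z0 + dz)

# horizontal probe of length 0.2 m centered at (1.0, 0.0)
p1, p2 = probe_segment(1.0, 0.0, 0.0, 0.2)
```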

omfit_classes.omfit_omas.transform_current(rho, JtoR=None, JparB=None, equilibrium=None, includes_bootstrap=False)[source]

Given <Jt/R> returns <J.B>, or vice versa. The transformation obeys <J.B> = (1/f)*(<B^2>/<1/R^2>)*(<Jt/R> + dp/dpsi*(1 - f^2*<1/R^2>/<B^2>)). N.B. the input current must be in the same COCOS as equilibrium.cocosio

Parameters
  • rho – normalized rho grid for input JtoR or JparB

  • JtoR – input <Jt/R> profile (cannot be set along with JparB)

  • JparB – input <J.B> profile (cannot be set along with JtoR)

  • equilibrium – equilibrium.time_slice[:] ODS containing quantities needed for the transformation

  • includes_bootstrap – set to True if input current includes bootstrap

Returns

<Jt/R> if JparB set or <J.B> if JtoR set

Example: given total <Jt/R> on a rho grid with an existing ods, return <J.B>:

JparB = transform_current(rho, JtoR=JtoR, equilibrium=ods['equilibrium']['time_slice'][0], includes_bootstrap=True)

omfit_classes.omfit_omas.search_ion(ion_ods, label=None, Z=None, A=None, no_matches_raise_error=True, multiple_matches_raise_error=True)[source]

Utility function used to identify the ion number and element numbers given the ion label and/or their Z and/or A

Parameters
  • ion_ods – ODS location that ends with .ion

  • label – ion label

  • Z – ion element charge

  • A – ion element mass

  • no_matches_raise_error – whether to raise an IndexError when no ion matches are found

  • multiple_matches_raise_error – whether to raise an IndexError when multiple ion matches are found

Returns

dictionary with matching ions labels, each with list of matching ion elements

omfit_classes.omfit_omas.search_in_array_structure(ods, conditions, no_matches_return=0, no_matches_raise_error=False, multiple_matches_raise_error=True)[source]

Search for the indices in an array of structures that match some conditions

Parameters
  • ods – ODS location that is an array of structures

  • conditions – dictionary (or ODS) with entries that must match and their values:

    • condition['name']=value : check value

    • condition['name']=True : check existence

    • condition['name']=False : check non-existence

    NOTE: using True/False as flags for (non-)existence is not an issue since IMAS does not support booleans

  • no_matches_return – what index to return if no matches are found

  • no_matches_raise_error – whether to raise an error if no matches are found

  • multiple_matches_raise_error – whether to raise an error if multiple matches are found

Returns

list with indices matching the conditions
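
The matching logic can be illustrated on a plain list of dictionaries (a simplified stand-in for an ODS array of structures; the function and variable names are hypothetical):

```python
def search_in_list(items, conditions):
    """Return indices of the dicts in items that satisfy all conditions.

    A condition value of True checks that the key exists;
    False checks that it does not; any other value is compared directly.
    """
    matches = []
    for index, item in enumerate(items):
        for name, value in conditions.items():
            if value is True:
                ok = name in item
            elif value is False:
                ok = name not in item
            else:
                ok = item.get(name) == value
            if not ok:
                break
        else:
            # all conditions satisfied for this entry
            matches.append(index)
    return matches

ions = [{'label': 'D'}, {'label': 'T', 'fast': 1}]
```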

omfit_classes.omfit_omas.get_plot_scale_and_unit(phys_quant, species=None)[source]

Returns the normalizing scale and unit for a physical quantity, e.g. "temperature" returns 1.e-3 and keV :param phys_quant: str with a physical quantity. Uses IMAS schema names where possible :return: scale, unit

omfit_classes.omfit_omas.define_cocos(cocos_ind)[source]

Returns dictionary with COCOS coefficients given a COCOS index

https://docs.google.com/document/d/1-efimTbI55SjxL_yE_GKSmV4GEvdzai7mAj5UYLLUXw/edit

Parameters

cocos_ind – COCOS index

Returns

dictionary with COCOS coefficients

omfit_classes.omfit_omas.cocos_transform(cocosin_index, cocosout_index)[source]

Returns a dictionary with coefficients for how various quantities should get multiplied in order to go from cocosin_index to cocosout_index

https://docs.google.com/document/d/1-efimTbI55SjxL_yE_GKSmV4GEvdzai7mAj5UYLLUXw/edit

Parameters
  • cocosin_index – COCOS index in

  • cocosout_index – COCOS index out

Returns

dictionary with transformation multipliers

omfit_classes.omfit_omas.identify_cocos(B0, Ip, q, psi, clockwise_phi=None, a=None)[source]

Utility function to identify the COCOS coordinate system. If multiple COCOS are possible, then all are returned.

Parameters
  • B0 – toroidal magnetic field (with sign)

  • Ip – plasma current (with sign)

  • q – safety factor profile (with sign) as function of psi

  • psi – poloidal flux as a function of psi (with sign)

  • clockwise_phi – (optional) [True, False] whether the phi angle is defined clockwise or not. This is required to identify odd vs even COCOS. Note that this cannot be determined from the output of a code. An easy way to determine this is to answer the question: is positive B0 clockwise?

  • a – (optional) flux surfaces minor radius as function of psi This is required to identify 2*pi term in psi definition

Returns

list with possible COCOS

omfit_classes.omfit_omas.omas_environment(ods, cocosio=None, coordsio=None, unitsio=None, uncertainio=None, input_data_process_functions=None, xmlcodeparams=False, dynamic_path_creation=None, **kw)[source]

Provides environment for data input/output to/from OMAS

Parameters
  • ods – ODS on which to operate

  • cocosio – COCOS convention

  • coordsio – dictionary/ODS with coordinates for data interpolation

  • unitsio – True/False whether data read from OMAS should have units

  • uncertainio – True/False whether data read from OMAS should have uncertainties

  • input_data_process_functions – list of functions that are used to process data that is passed to the ODS

  • xmlcodeparams – view code.parameters as an XML string while in this environment

  • dynamic_path_creation – whether to dynamically create the path when setting an item:

    • False: raise an error when trying to access a structure element that does not exist

    • True (default): arrays of structures can be incrementally extended by accessing the next element in the array

    • 'dynamic_array_structures': arrays of structures can be dynamically extended

  • kw – extra keywords set attributes of the ods (eg. ‘consistency_check’, ‘dynamic_path_creation’, ‘imas_version’)

Returns

ODS with environment set
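
The environment follows a temporarily-set-then-restore pattern; a generic sketch of the same idea with contextlib (this is not the omas implementation, just the pattern it relies on):

```python
from contextlib import contextmanager

@contextmanager
def attribute_environment(obj, **attrs):
    """Temporarily set attributes on obj, restoring the originals on exit."""
    saved = {name: getattr(obj, name) for name in attrs}
    try:
        for name, value in attrs.items():
            setattr(obj, name, value)
        yield obj
    finally:
        # restore the original attribute values even if the body raised
        for name, value in saved.items():
            setattr(obj, name, value)

class Settings:
    cocosio = 11

settings = Settings()
with attribute_environment(settings, cocosio=17):
    inside = settings.cocosio   # 17 while inside the environment
after = settings.cocosio        # restored to 11 afterwards
```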

omfit_classes.omfit_omas.save_omas_imas_remote(serverPicker, ods, **kw)[source]

Save OMAS data set to a remote IMAS server

Parameters
  • serverPicker – remote server name where to connect to

  • **kw – all other parameters are passed to the save_omas_imas function

omfit_classes.omfit_omas.load_omas_imas_remote(serverPicker, **kw)[source]

Load OMAS data set from a remote IMAS server

Parameters
  • serverPicker – remote server name where to connect to

  • **kw

    all other parameters are passed to the load_omas_imas function

    Load OMAS data from IMAS

    param user

    IMAS username

    param machine

    IMAS machine

    param pulse

    IMAS pulse

    param run

    IMAS run

    param occurrence

    dictinonary with the occurrence to load for each IDS

    param paths

    list of paths to load from IMAS

    param time

    time slice [expressed in seconds] to be extracted

    param imas_version

    IMAS version (force specific version)

    param skip_uncertainties

    do not load uncertain data

    param consistency_check

    perform consistency_check

    param verbose

    print loading progress

    param backend

    Which backend to use, can be one of MDSPLUS, ASCII, HDF5, MEMORY, UDA, NO

    return

    OMAS data set


omfit_classes.omfit_omas.load_omas_uda_remote(serverPicker, **kw)[source]

Load OMAS data set from a remote UDA server

Parameters
  • serverPicker – remote server name where to connect to

  • **kw – all other parameters are passed to the load_omas_uda function

omfit_classes.omfit_omas.browse_imas_remote(serverPicker, **kw)[source]

Browse available IMAS data (machine/pulse/run) for given user on remote IMAS server

Parameters
  • serverPicker – remote server name where to connect to

  • **kw – all other parameters are passed to the browse_imas function

omfit_classes.omfit_omas.iter_scenario_summary_remote(quiet=False, environment='module purge\nmodule load IMAS')[source]

Access the ITER server and run the scenario_summary command

Parameters
  • quiet – print output of scenario_summary to screen or not

  • environment – module load {IMAS_module} or the like

Returns

dictionary with info from available scenarios

omfit_classes.omfit_omas.load_omas_iter_scenario_remote(**kw)[source]

Load OMAS iter scenario from a remote ITER IMAS server

Parameters

**kw

all other parameters are passed to the load_omas_iter_scenario function

Load ODS from ITER IMAS scenario database

param pulse

IMAS pulse

param run

IMAS run

param paths

list of paths to load from IMAS

param imas_version

IMAS version

param verbose

print loading progress

return

OMAS data set

class omfit_classes.omfit_omas.OMFITiterscenario(*args, **kwargs)[source]

Bases: omfit_classes.omfit_json.OMFITjson

OMFIT class to parse json files

Parameters
  • filename – filename of the json file

  • use_leading_comma – whether commas should be leading

  • add_top_level_brackets – whether to add opening { and closing } to string read from file

  • **kw – arguments passed to __init__ of OMFITascii

filter(conditions, squash=False)[source]

Filter database for certain conditions

Parameters
  • conditions – dictionary with conditions for returning a match. For example: {‘List of IDSs’:[‘equilibrium’,’core_profiles’,’core_sources’,’summary’], ‘Workflow’:’CORSICA’, ‘Fuelling’:’D-T’}

  • squash – remove attributes that are equal among all entries

Returns

OMFITiterscenario dictionary only with matching entries
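The filtering semantics can be sketched with plain dictionaries (the entry names and attributes below are hypothetical, not actual scenario database content):

```python
# Minimal sketch of filter(): keep only entries whose attributes match
# all of the requested conditions (hypothetical data, not real entries).
database = {
    'scenario_A': {'Workflow': 'CORSICA', 'Fuelling': 'D-T'},
    'scenario_B': {'Workflow': 'ASTRA', 'Fuelling': 'D'},
}

def filter_entries(db, conditions):
    return {name: attrs for name, attrs in db.items()
            if all(attrs.get(key) == value for key, value in conditions.items())}

matches = filter_entries(database, {'Workflow': 'CORSICA', 'Fuelling': 'D-T'})
# matches contains only 'scenario_A'
```

The real method additionally supports list-valued conditions (e.g. ‘List of IDSs’) and the squash option to drop attributes common to all entries.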

class omfit_classes.omfit_omas.OMFITods(filename, ods=None, **kw)[source]

Bases: omfit_classes.startup_framework.OMFITobject, omas.omas_core.ODS

Parameters
  • imas_version – IMAS version to use as a constraint for the node names

  • consistency_check – whether to enforce consistency with IMAS schema

  • cocos – internal COCOS representation (this can only be set when the object is created)

  • cocosio – COCOS representation of the data that is read/written from/to the ODS

  • coordsio – ODS with coordinates to use for the data that is read/written from/to the ODS

  • unitsio – ODS will return data with units if True

  • uncertainio – ODS will return data with uncertainties if True

  • dynamic – internal keyword used for dynamic data loading

load()[source]

Load OMAS data

Parameters
  • filename – filename.XXX where the extension is used to select load format method (eg. ‘pkl’,’nc’,’h5’,’ds’,’json’,’ids’) set to imas, s3, hdc, mongo for save methods that do not have a filename with extension

  • consistency_check – perform consistency check once the data is loaded

  • *args – extra arguments passed to load_omas_XXX() method

  • **kw – extra keywords passed to load_omas_XXX() method

Returns

ODS with loaded data

save()[source]

The save method is supposed to be overridden by classes which use OMFITobject as a superclass. If left as it is this method can detect if .filename was changed and if so, makes a copy from the original .filename (saved in the .link attribute) to the new .filename

clear(*args, **kw)

remove data from a branch

Returns

current ODS object

close(*args, **kw)

None

codeparams2dict(*args, **kw)

Convert code.parameters to a CodeParameters dictionary object

codeparams2xml(*args, **kw)

Convert code.parameters to a XML string

coordinates(*args, **kw)

return dictionary with coordinates of a given ODS location

Parameters

key – ODS location to return the coordinates of. NOTE: both the key location and coordinates must have data

Returns

OrderedDict with coordinates of a given ODS location

copy(*args, **kw)
Returns

copy.deepcopy of current ODS object

copy_attrs_from(*args, **kw)

copy omas_ods_attrs attributes from input ods

Parameters

ods – input ods

Returns

self

dataset(*args, **kw)

Return xarray.Dataset representation of a whole ODS

Forming the N-D labeled arrays (tensors) that are at the base of xarray requires that the number of elements in the arrays does not change across the arrays of data structures.

Parameters

homogeneous

  • False: flat representation of the ODS

    (data is not collected across arrays of structures)

  • ’time’: collect arrays of structures only along the time dimension

    (always valid for homogeneous_time=True)

  • ’full’: collect arrays of structures along all dimensions
    (may be valid in many situations, especially related to simulation data with homogeneous_time=True and where for example the number of ions, sources, etc. do not vary)

  • None: smart setting, uses homogeneous=’time’ if homogeneous_time=True else False

Returns

xarray.Dataset

diff(*args, **kw)

return differences between this ODS and the one passed

Parameters
  • ods – ODS to compare against

  • ignore_type – ignore object type differences

  • ignore_empty – ignore empty nodes

  • ignore_keys – ignore the following keys

  • ignore_default_keys

    ignores the following keys from the comparison

dataset_description.data_entry.user
dataset_description.data_entry.run
dataset_description.data_entry.machine
dataset_description.ids_properties
dataset_description.imas_version
dataset_description.time
ids_properties.homogeneous_time
ids_properties.occurrence
ids_properties.version_put.data_dictionary
ids_properties.version_put.access_layer
ids_properties.version_put.access_layer_language

rtol : The relative tolerance parameter

atol : The absolute tolerance parameter

Returns

dictionary with differences
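The kind of output diff() produces can be illustrated with a minimal recursive comparison of plain nested dictionaries (a sketch only; the real method also handles arrays, the rtol/atol tolerances, and the ignore options):

```python
def dict_diff(a, b, path=''):
    # Recursively collect the locations where two nested dictionaries differ.
    diffs = {}
    for key in set(a) | set(b):
        loc = f'{path}.{key}' if path else str(key)
        if key not in a:
            diffs[loc] = 'missing in first'
        elif key not in b:
            diffs[loc] = 'missing in second'
        elif isinstance(a[key], dict) and isinstance(b[key], dict):
            diffs.update(dict_diff(a[key], b[key], loc))
        elif a[key] != b[key]:
            diffs[loc] = 'value differs'
    return diffs

dict_diff({'x': {'y': 1}}, {'x': {'y': 2}})  # {'x.y': 'value differs'}
```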

diff_attrs(*args, **kw)

Checks if two ODSs have any difference in their attributes

Parameters
  • ods – ODS to compare against

  • attrs – list of attributes to compare

  • verbose – print differences to stdout

Returns

dictionary with list of attributes that have differences, or False otherwise

document(*args, **kw)

RST documentation of the ODS content

Parameters

what – fields to be included in the documentation; if None, all fields are included

Returns

string with RST documentation

flat(*args, **kw)

Flat dictionary representation of the data

Parameters

**kw – extra keywords passed to the path() method

Returns

OrderedDict with flat representation of the data
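The flat representation maps hierarchical locations to dotted keys; a minimal sketch with plain dictionaries (the locations below are hypothetical):

```python
def flatten(tree, prefix=''):
    # Flatten a nested dictionary into {'a.b.c': value} form,
    # mimicking the flat() representation of an ODS.
    out = {}
    for key, value in tree.items():
        loc = f'{prefix}.{key}' if prefix else str(key)
        if isinstance(value, dict):
            out.update(flatten(value, loc))
        else:
            out[loc] = value
    return out

flat = flatten({'equilibrium': {'time': [0.1], 'code': {'name': 'EFIT'}}})
# flat == {'equilibrium.time': [0.1], 'equilibrium.code.name': 'EFIT'}
```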

from_structure(*args, **kw)

Generate an ODS starting from a hierarchical structure made of dictionaries and lists

Parameters

structure – input structure

Returns

self

full_paths(*args, **kw)

Traverse the ods and return paths from root of ODS that have data

Parameters

**kw – extra keywords passed to the path() method

Returns

list of paths that have data

full_pretty_paths(*args, **kw)

Traverse the ods and return paths from root of ODS that have data formatted nicely

Parameters

**kw – extra keywords passed to the full_paths() method

Returns

list of paths that have data formatted nicely

func = 'xarray'
get(*args, **kw)

Check if key is present and if not return default value without creating value in omas data structure

Parameters
  • key – ods location

  • default – default value

Returns

return default if key is not found
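The key property of get() is that a missing location is not created as a side effect, unlike plain item access on an ODS. A sketch of that behavior with plain nested dictionaries (hypothetical locations, not the OMAS implementation):

```python
def safe_get(tree, path, default=None):
    # Walk a dotted path; return default if any segment is missing,
    # without creating intermediate nodes along the way.
    node = tree
    for key in path.split('.'):
        if not isinstance(node, dict) or key not in node:
            return default
        node = node[key]
    return node

tree = {'equilibrium': {'time': [0.1]}}
safe_get(tree, 'equilibrium.time')        # [0.1]
safe_get(tree, 'core_profiles.time', [])  # [] -- and 'core_profiles' is not created
```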

getraw(*args, **kw)

Method to access data stored in the ODS with no processing of the key; it is thus faster than ODS.__getitem__(key) and effectively behaves like a pure Python dictionary/list __getitem__. This method is mostly meant to be used in the inner workings of the ODS class. NOTE: ODS.__getitem__(key, False) can be used to access items in the ODS with cocos and coordinates processing disabled, but with support for the different syntaxes to access data

Parameters

key – string or integer

Returns

ODS value

homogeneous_time(*args, **kw)

Dynamically evaluate whether time is homogeneous or not. NOTE: this method does not read ods[‘ids_properties.homogeneous_time’]; instead it uses the time info to figure it out

Parameters

default – what to return in case no time basis is defined

Returns

True/False or default value (True) if no time basis is defined
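The decision logic described here (identical time bases imply homogeneous time; no time basis yields the default) can be sketched as follows; this is an illustration of the logic, not the OMAS implementation:

```python
import numpy as np

def is_homogeneous_time(time_bases, default=True):
    # Homogeneous if every defined time basis has the same shape and values;
    # return the default when no time basis is defined at all.
    bases = [np.asarray(t) for t in time_bases if t is not None]
    if not bases:
        return default
    first = bases[0]
    return all(b.shape == first.shape and np.allclose(b, first) for b in bases)
```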

info(*args, **kw)

return node info

Parameters

location – location of the node to return info of

Returns

dictionary with info

items(*args, **kw)

None

keys(*args, **kw)

Return list of keys

Parameters

dynamic – whether dynamically loaded keys should be shown. This is True by default because this should be the case for user-facing calls. Within the inner workings of OMAS we thus need to be careful and keep track of when this should not be the case. Throughout the library we use dynamic=1 or dynamic=0 for debug purposes, since one can place a conditional breakpoint in this function (checking if dynamic is True and self.dynamic) to verify that the dynamic=True calls indeed come from the user and not from within the library itself.

Returns

list of keys

list_coordinates(*args, **kw)

return dictionary with coordinates in a given ODS

Parameters

absolute_location – return keys as absolute or relative locations

Returns

dictionary with coordinates

open(*args, **kw)

Dynamically load OMAS data for seekable storage formats

Parameters
  • filename – filename.XXX where the extension is used to select load format method (eg. ‘nc’,’h5’,’ds’,’json’,’ids’) set to imas, s3, hdc, mongo for save methods that do not have a filename with extension

  • consistency_check – perform consistency check once the data is loaded

  • *args – extra arguments passed to dynamic_omas_XXX() method

  • **kw – extra keywords passed to dynamic_omas_XXX() method

Returns

ODS with loaded data

paths(*args, **kw)

Traverse the ods and return paths to its leaves

Parameters
  • return_empty_leaves – if False, only return paths to leaves that have data; if True, also return paths to empty leaves

  • traverse_code_parameters – traverse code parameters

  • include_structures – include paths leading to the leaves

  • dynamic – traverse paths that are not loaded in a dynamic ODS

Returns

list of paths that have data
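A minimal sketch of such a traversal on plain nested dictionaries (the real method also knows about code parameters, structure paths, and dynamic loading):

```python
def leaf_paths(tree, prefix=()):
    # Return tuple-paths to leaves that hold data; empty branches are skipped
    # (the return_empty_leaves=False behavior).
    out = []
    for key, value in tree.items():
        path = prefix + (key,)
        if isinstance(value, dict) and value:
            out.extend(leaf_paths(value, path))
        elif not isinstance(value, dict):
            out.append(path)
    return out

leaf_paths({'a': {'b': 1}, 'c': 2})  # [('a', 'b'), ('c',)]
```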

physics_add_phi_to_equilbrium_profiles_1d_ods(*args, **kw)

Adds profiles_1d.phi to an ODS using q

Parameters
  • ods – input ods

  • time_index – time slices to process

physics_add_rho_pol_norm_to_equilbrium_profiles_1d_ods(*args, **kw)

None

physics_check_iter_scenario_requirements(*args, **kw)

Check that the current ODS satisfies the ITER scenario database requirements as defined in https://confluence.iter.org/x/kQqOE

Returns

list of elements that are missing to satisfy the ITER scenario requirements

physics_consistent_times(*args, **kw)

Assign .time and .ids_properties.homogeneous_time info for top-level structures since these are required for writing an IDS to IMAS

Parameters
  • attempt_fix – fix dataset_description and wall IDS to have 0 times if none is set

  • raise_errors – raise errors if could not satisfy IMAS requirements

Returns

True if all is good, False if requirements are not satisfied, None if fixes were applied

physics_core_profiles_consistent(*args, **kw)
Calls all core_profiles consistency functions including
  • core_profiles_densities

  • core_profiles_pressures

  • core_profiles_zeff

Parameters
  • ods – input ods

  • update – operate in place

  • use_electrons_density – denominator is core_profiles.profiles_1d.:.electrons.density instead of sum Z*n_i in Z_eff calculation

  • enforce_quasineutrality – update electron density to be quasineutral with ions

Returns

updated ods

physics_core_profiles_currents(*args, **kw)

This function sets currents in ods[‘core_profiles’][‘profiles_1d’][time_index]

If provided currents are inconsistent with each other or ods, ods is not updated and an error is thrown.

Updates integrated currents in ods[‘core_profiles’][‘global_quantities’] (N.B.: equilibrium IDS is required for evaluating j_tor and integrated currents)

Parameters
  • ods – ODS to update in-place

  • time_index – ODS time index to update; if None, all times are updated

Parameters

rho_tor_norm – normalized rho grid upon which each j is given

For each j:
  • ndarray: set in ods if consistent

  • ‘default’: use value in ods if present, else set to None

  • None: try to calculate from currents; delete from ods if you can’t

Parameters
  • j_actuator – Non-inductive, non-bootstrap current <J.B>/B0 N.B.: used for calculating other currents and consistency, but not set in ods

  • j_bootstrap – Bootstrap component of <J.B>/B0

  • j_ohmic – Ohmic component of <J.B>/B0

  • j_non_inductive – Non-inductive component of <J.B>/B0 Consistency requires j_non_inductive = j_actuator + j_bootstrap, either as explicitly provided or as computed from other components.

  • j_total – Total <J.B>/B0 Consistency requires j_total = j_ohmic + j_non_inductive either as explicitly provided or as computed from other components.
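The two consistency rules above can be checked numerically; a sketch with hypothetical current-density profiles on a common rho_tor_norm grid (for illustration only):

```python
import numpy as np

# Hypothetical current-density profiles, for illustrating the consistency rules
rho = np.linspace(0.0, 1.0, 5)
j_bootstrap = 0.2 * (1 - rho**2)
j_actuator = 0.5 * (1 - rho**2)
j_ohmic = 0.8 * (1 - rho**2)

j_non_inductive = j_actuator + j_bootstrap  # rule 1: non-inductive = actuator + bootstrap
j_total = j_ohmic + j_non_inductive         # rule 2: total = ohmic + non-inductive

assert np.allclose(j_total, j_ohmic + j_actuator + j_bootstrap)
```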

physics_core_profiles_densities(*args, **kw)

Density, density_thermal, and density_fast for electrons and ions are filled and are self-consistent

Parameters
  • ods – input ods

  • update – operate in place

  • enforce_quasineutrality – update electron density to be quasineutral with ions

Returns

updated ods

physics_core_profiles_pressures(*args, **kw)

Calculates individual ions pressures

core_profiles.profiles_1d.:.ion.:.pressure_thermal  # Pressure (thermal) associated with random motion ~average((v-average(v))^2)
core_profiles.profiles_1d.:.ion.:.pressure  # Pressure (thermal+non-thermal)

as well as total pressures

core_profiles.profiles_1d.:.pressure_thermal  # Thermal pressure (electrons+ions)
core_profiles.profiles_1d.:.pressure_ion_total  # Total (sum over ion species) thermal ion pressure
core_profiles.profiles_1d.:.pressure_perpendicular  # Total perpendicular pressure (electrons+ions, thermal+non-thermal)
core_profiles.profiles_1d.:.pressure_parallel  # Total parallel pressure (electrons+ions, thermal+non-thermal)

NOTE: the fast particles ion pressures are read, not set by this function:

core_profiles.profiles_1d.:.ion.:.pressure_fast_parallel  # Fast-ion parallel pressure
core_profiles.profiles_1d.:.ion.:.pressure_fast_perpendicular  # Fast-ion perpendicular pressure

Parameters
  • ods – input ods

  • update – operate in place

Returns

updated ods

physics_core_profiles_zeff(*args, **kw)

calculates effective charge

Parameters
  • ods – input ods

  • update – operate in place

  • use_electrons_density – denominator is core_profiles.profiles_1d.:.electrons.density instead of sum Z*n_i

  • enforce_quasineutrality – update electron density to be quasineutral with ions

Returns

updated ods
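The underlying formula, Z_eff = Σ n_i Z_i² / denominator, can be sketched directly (hypothetical profiles; an illustration, not the OMAS implementation):

```python
import numpy as np

def z_eff(ion_densities, charges, n_e=None):
    # Z_eff = sum(n_i * Z_i**2) / denominator; the denominator is sum(n_i * Z_i)
    # by default, or the electron density when given (the use_electrons_density option).
    num = sum(n * z**2 for n, z in zip(ion_densities, charges))
    den = n_e if n_e is not None else sum(n * z for n, z in zip(ion_densities, charges))
    return num / den

n_d = np.full(3, 5e19)  # hypothetical pure-deuterium density profile
z_eff([n_d], [1.0])     # array of ones: Z_eff == 1 for a pure plasma
```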

physics_core_sources_j_parallel_sum(*args, **kw)

ods function used to sum all j_parallel contributions from core_sources (j_actuator)

Parameters
  • ods – input ods

  • time_index – time slice to process

Returns

sum of j_parallel in [A/m^2]

physics_current_from_eq(*args, **kw)

This function sets the currents in ods[‘core_profiles’][‘profiles_1d’][time_index] using ods[‘equilibrium’][‘time_slice’][time_index][‘profiles_1d’][‘j_tor’]

Parameters
  • ods – ODS to update in-place

  • time_index – ODS time index to update; if None, all times are updated

physics_derive_equilibrium_profiles_2d_quantity(*args, **kw)

This function derives values of empty fields in profiles_2d from other quantities in the equilibrium ods. Currently only the magnetic field components are supported.

Parameters
  • ods – input ods

  • time_index – time slice to process

  • grid_index – Index of grid to map

  • quantity – Member of profiles_2d to be derived

Returns

updated ods

physics_equilibrium_consistent(*args, **kw)

Calculate missing derived quantities for equilibrium IDS

Parameters

ods – ODS to update in-place

Returns

updated ods

physics_equilibrium_form_constraints(*args, **kw)

generate equilibrium constraints from experimental data in ODS

Parameters
  • ods – input ODS

  • times – list of times at which to generate the constraints

  • default_average – default averaging time

  • constraints

    list of constraints to be formed (if experimental data is available) NOTE: only the constraints marked with OK are supported at this time:

    OK b_field_tor_vacuum_r
    OK bpol_probe
    OK diamagnetic_flux
     * faraday_angle
    OK flux_loop
    OK ip
     * iron_core_segment
     * mse_polarisation_angle
     * n_e
     * n_e_line
    OK pf_current
     * pf_passive_current
     * pressure
     * q
     * strike_point
     * x_point
    

  • averages – dictionary with average times for individual constraints. Smoothed using a Gaussian with sigma=averages/4, with the convolution integrated across +/-4*sigma.

  • cutoff_hz – a list of two elements with low and high cutoff frequencies [lowFreq, highFreq]

  • rm_integr_drift_after – time in ms after which all currents are assumed to be zero and the signal should be equal to zero. Used for removing the integrator drift

  • update – operate in place

Returns

updated ods

physics_equilibrium_ggd_to_rectangular(*args, **kw)

Convert GGD data to profiles 2D

Parameters
  • ods – input ods

  • time_index – time slices to process

  • resolution – integer or tuple for rectangular grid resolution

  • method – one of ‘nearest’, ‘linear’, ‘cubic’, ‘extrapolate’

  • update – operate in place

Returns

updated ods

physics_equilibrium_profiles_2d_map(*args, **kw)

This routine creates interpolators for quantities and stores them in the cache for future use. It can also be used to just return the current profiles_2d quantity by omitting dim1 and dim2. At the moment this routine always extrapolates for data outside the defined grid range.

Parameters
  • ods – input ods

  • time_index – time slices to process

  • grid_index – Index of grid to map

  • quantity – Member of profiles_2d[:] to map

  • dim1 – First coordinate of the points to map to

  • dim2 – Second coordinate of the points to map to

  • cache – Cache to store interpolants in

  • return_cache – Toggles return of cache

Returns

mapped positions (and cache if return_cache)

physics_equilibrium_stored_energy(*args, **kw)

Calculate MHD stored energy from equilibrium pressure and volume

Parameters
  • ods – input ods

  • update – operate in place

Returns

updated ods

physics_equilibrium_transpose_RZ(*args, **kw)

Transpose 2D grid values for RZ grids under equilibrium.time_slice.:.profiles_2d.:.

Parameters
  • ods – ODS to update in-place

  • flip_dims – whether to switch the equilibrium.time_slice.:.profiles_2d.:.grid.dim1 and dim2

Returns

updated ods

physics_imas_info(*args, **kw)

add ids_properties.version_put… information

Returns

updated ods

physics_magnetics_sanitize(*args, **kw)

Take data in legacy magnetics.bpol_probe and store it in current magnetics.b_field_pol_probe and magnetics.b_field_tor_probe

Parameters

ods – ODS to update in-place

Returns

updated ods

physics_remap_flux_coordinates(*args, **kw)

Maps from one magnetic coordinate system to another. At the moment only supports psi <-> rho_pol

Parameters
  • ods – input ods

  • time_index – time slices to process

  • origin – Specifier for original coordinate system

  • destination – Target coordinate system for output

  • values – Values to transform

Returns

Transformed values

physics_resolve_equilibrium_profiles_2d_grid_index(*args, **kw)

Convenience function to identify which of profiles_2d[:].grid_type.index matches the specified grid_identifier

Parameters
  • ods – input ods

  • time_index – time index to search

  • grid_identifier – grid type to be resolved

Returns

Index of the requested grid, not to be confused with profiles_2d[:].grid_type.index

physics_summary_consistent_global_quantities(*args, **kw)

Generate summary.global_quantities from global_quantities of other IDSs

Parameters
  • ods – input ods

  • ds – IDS from which to update summary.global_quantities. All IDSs if None.

  • update – operate in place

Returns

updated ods

physics_summary_currents(*args, **kw)

Calculates plasma currents from core_profiles for each time slice and stores them in the summary ods

Parameters
  • ods – input ods

  • time_index – time slices to process

  • update – operate in place

Returns

updated ods

physics_summary_global_quantities(*args, **kw)
Calculates global quantities for each time slice and stores them in the summary ods:
  • Greenwald Fraction

  • Energy confinement time estimated from the IPB98(y,2) scaling

  • Integrate power densities to the totals

  • Generate summary.global_quantities from global_quantities of other IDSs

Parameters
  • ods – input ods

  • update – operate in place

Returns

updated ods

physics_summary_greenwald(*args, **kw)

Calculates Greenwald Fraction for each time slice and stores them in the summary ods.

Parameters
  • ods – input ods

  • update – operate in place

Returns

updated ods
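For reference, the Greenwald fraction is the line-average density divided by the Greenwald density n_GW = Ip/(π a²) (Ip in MA, a in m, n_GW in 10^20 m^-3); a back-of-envelope sketch with hypothetical numbers, independent of the OMAS implementation:

```python
import math

def greenwald_fraction(n_e_line, ip_ma, a_minor):
    # n_GW = Ip / (pi * a^2) in 1e20 m^-3; fraction = n_e_line / n_GW
    n_gw = ip_ma / (math.pi * a_minor**2) * 1e20  # m^-3
    return n_e_line / n_gw

greenwald_fraction(8e19, ip_ma=15.0, a_minor=2.0)  # ~0.67 for ITER-like numbers
```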

physics_summary_heating_power(*args, **kw)

Integrates power densities to totals for the heating and current drive systems and fills summary.global_quantities

Parameters
  • ods – input ods

  • update – operate in place

Returns

updated ods

physics_summary_lineaverage_density(*args, **kw)

Calculates line-average electron density for each time slice and stores them in the summary ods

Parameters
  • ods – input ods

  • line_grid – number of points to calculate line average density over (includes point outside of boundary)

  • time_index – time slices to process

  • update – operate in place

  • doPlot – plots the interferometer lines on top of the equilibrium boundary shape

Returns

updated ods

physics_summary_taue(*args, **kw)

Calculates Energy confinement time estimated from the IPB98(y,2) scaling for each time slice and stores them in the summary ods

Parameters
  • ods – input ods

  • update – operate in place

  • thermal – if True, calculates the thermal part of the energy confinement time from core_profiles; otherwise uses the stored energy MHD from the equilibrium ods

Returns

updated ods
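The IPB98(y,2) scaling itself (exponents as published in the ITER Physics Basis) can be sketched for reference, independently of the OMAS implementation:

```python
def tau_ipb98y2(ip_ma, bt, n19, p_mw, r_major, kappa, eps, mass):
    # IPB98(y,2) thermal energy confinement time [s]:
    # Ip [MA], Bt [T], line-average density [1e19 m^-3], loss power [MW],
    # major radius [m], elongation, inverse aspect ratio, effective mass [amu]
    return (0.0562 * ip_ma**0.93 * bt**0.15 * n19**0.41 * p_mw**-0.69
            * r_major**1.97 * kappa**0.78 * eps**0.58 * mass**0.19)

tau_ipb98y2(15.0, 5.3, 10.0, 100.0, 6.2, 1.7, 0.32, 2.5)  # a few seconds for ITER-like inputs
```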

physics_summary_thermal_stored_energy(*args, **kw)

Calculates the stored energy based on the contents of core_profiles for all time-slices

Parameters
  • ods – input ods

  • update – operate in place

Returns

updated ods

physics_wall_add(*args, **kw)

Add wall information to the ODS

Parameters
  • ods – ODS to update in-place

  • machine – machine of which to load the wall (if None it is taken from ods[‘dataset_description.data_entry.machine’])

plot_bolometer_overlay(*args, **kw)

Overlays bolometer chords

Parameters
  • ods – ODS instance

  • ax – axes instance into which to plot (default: gca())

  • reset_fan_color – bool At the start of each bolometer fan (group of channels), set color to None to let a new one be picked by the cycler. This will override manually specified color.

  • colors – list of matplotlib color specifications. Do not use a single RGBA style spec.

  • **kw

    Additional keywords for bolometer plot

    • Accepts standard omas_plot overlay keywords listed in overlay() documentation: mask, labelevery, …

    • Remaining keywords are passed to plot call for drawing lines for the bolometer sightlines

plot_charge_exchange_overlay(*args, **kw)

Overlays Charge Exchange Recombination (CER) spectroscopy channel locations

Parameters
  • ods – OMAS ODS instance

  • ax – axes instance into which to plot (default: gca())

  • which_pos

    string ‘all’: plot all valid positions this channel uses. This can vary in time depending on which beams are on.

    ’closest’: for each channel, pick the time slice with valid data closest to the time used for the equilibrium contours and show position at this time. Falls back to ‘all’ if equilibrium time cannot be read from time_slice 0 of equilibrium in the ODS.

  • **kw

    Additional keywords for CER plot:

    color_tangential: color to use for tangentially-viewing channels

    color_vertical: color to use for vertically-viewing channels

    color_radial: color to use for radially-viewing channels

    marker_tangential, marker_vertical, marker_radial: plot symbols to use for T, V, R viewing channels

    • Accepts standard omas_plot overlay keywords listed in overlay() documentation: mask, labelevery, …

    • Remaining keywords are passed to plot call

plot_core_profiles_currents_summary(*args, **kw)

Plot currents in core_profiles_1d

Parameters
  • ods – input ods

  • fig – figure to plot in (a new figure is generated if fig is None)

  • time_index – int, list of ints, or None time slice to plot. If None all timeslices are plotted.

  • time – float, list of floats, or None time to plot. If None all time slices are plotted. If not None, it takes precedence over time_index

plot_core_profiles_pressures(*args, **kw)

Plot pressures in ods[‘core_profiles’][‘profiles_1d’][time_index]

Parameters
  • ods – input ods

  • time_index – int, list of ints, or None time slice to plot. If None all timeslices are plotted.

  • time – float, list of floats, or None time to plot. If None all time slices are plotted. If not None, it takes precedence over time_index

  • ax – axes to plot in (active axes is generated if ax is None)

  • kw – arguments passed to matplotlib plot statements

Returns

axes handler

plot_core_profiles_summary(*args, **kw)

Plot densities and temperature profiles for electrons and all ion species as per ods[‘core_profiles’][‘profiles_1d’][time_index]

Parameters
  • ods – input ods

  • fig – figure to plot in (a new figure is generated if fig is None)

  • time_index – int, list of ints, or None time slice to plot. If None all timeslices are plotted.

  • time – float, list of floats, or None time to plot. If None all time slices are plotted. If not None, it takes precedence over time_index

  • ods_species – list of ion species indices as listed in the core_profiles ods (electron index = -1); if None, plot all ion species

  • quantities – list of strings to plot from the profiles_1d ods like zeff, temperature & rotation_frequency_tor_sonic

  • kw – arguments passed to matplotlib plot statements

Returns

figure handler

plot_core_sources_summary(*args, **kw)

Plot sources for electrons and all ion species

Parameters
  • ods – input ods

  • time_index – int, list of ints, or None time slice to plot. If None all timeslices are plotted.

  • time – float, list of floats, or None time to plot. If None all time slices are plotted. If not None, it takes precedence over time_index

  • fig – figure to plot in (a new figure is generated if fig is None)

  • kw – arguments passed to matplotlib plot statements

Returns

axes

plot_core_transport_fluxes(*args, **kw)

Plot densities and temperature profiles for all species, rotation profile, TGYRO fluxes and fluxes from power_balance per STEP state.

Parameters
  • ods – input ods

  • time_index – int, list of ints, or None time slice to plot. If None all timeslices are plotted.

  • time – float, list of floats, or None time to plot. If None all time slices are plotted. If not None, it takes precedence over time_index

  • fig – figure to plot in (a new figure is generated if fig is None)

  • show_total_density – bool Show total thermal+fast in addition to thermal/fast breakdown if available

  • plot_zeff – if True, plot zeff below the plasma rotation

Kw

matplotlib plot parameters

Returns

axes

plot_ec_launchers_CX(*args, **kw)

Plot EC launchers in poloidal cross-section

Parameters
  • ods – input ods

  • time_index – int, list of ints, or None time slice to plot. If None all timeslices are plotted.

  • time – float, list of floats, or None time to plot. If None all time slices are plotted. If not None, it takes precedence over time_index

  • ax – axes to plot in (active axes is generated if ax is None)

  • kw – arguments passed to matplotlib plot statements

  • beam_trajectory – length of launcher on plot

Returns

axes handler

plot_ec_launchers_CX_topview(*args, **kw)

Plot EC launchers in toroidal cross-section

Parameters
  • ods – input ods

  • time_index – int, list of ints, or None time slice to plot. If None all timeslices are plotted.

  • time – float, list of floats, or None time to plot. If None all time slices are plotted. If not None, it takes precedence over time_index

  • ax – axes to plot in (active axes is generated if ax is None)

  • kw – arguments passed to matplotlib plot statements

  • beam_trajectory – length of launcher on plot

Returns

axes handler

plot_equilibrium_CX(*args, **kw)

Plot equilibrium cross-section as per ods[‘equilibrium’][‘time_slice’][time_index]

Parameters
  • ods – ODS instance input ods containing equilibrium data

  • time_index – int, list of ints, or None time slice to plot. If None all timeslices are plotted.

  • time – float, list of floats, or None time to plot. If None all time slices are plotted. If not None, it takes precedence over time_index

  • levels – sorted numeric iterable values to pass to 2D plot as contour levels

  • contour_quantity – string quantity to contour, anything in eq[‘profiles_1d’] or eq[‘profiles_2d’] or psi_norm

  • allow_fallback – bool If rho/phi is requested but not available, plot on psi instead if allowed. Otherwise, raise ValueError.

  • ax – Axes instance axes to plot in (active axes is generated if ax is None)

  • sf – int Resample scaling factor. For example, set to 3 to resample to 3x higher resolution. Makes contours smoother.

  • label_contours – bool or None True/False: do(n’t) label contours None: only label if contours are of q

  • show_wall – bool Plot the inner wall or limiting surface, if available

  • xkw – dict Keywords to pass to plot call to draw X-point(s). Disable X-points by setting xkw={‘marker’: ‘’}

  • ggd_points_triangles – Caching of ggd data structure as generated by omas_physics.grids_ggd_points_triangles() method

  • **kw – keywords passed to matplotlib plot statements

Returns

Axes instance

plot_equilibrium_CX_topview(*args, **kw)

Plot equilibrium toroidal cross-section as per ods[‘equilibrium’][‘time_slice’][time_index]

Parameters
  • ods – ODS instance input ods containing equilibrium data

  • time_index – int, list of ints, or None time slice to plot. If None all timeslices are plotted.

  • time – float, list of floats, or None time to plot. If None all time slices are plotted. If not None, it takes precedence over time_index

  • ax – Axes instance [optional] axes to plot in (active axes is generated if ax is None)

  • **kw – arguments passed to matplotlib plot statements

Returns

Axes instance

plot_equilibrium_quality(*args, **kw)

Plot equilibrium convergence error and total Chi-squared as a function of time

Parameters
  • ods – input ods

  • fig – figure to plot in (a new figure is generated if fig is None)

plot_equilibrium_summary(*args, **kw)

Plot equilibrium cross-section and P, q, P’, FF’ profiles as per ods[‘equilibrium’][‘time_slice’][time_index]

Parameters
  • ods – input ods

  • time_index – int, list of ints, or None time slice to plot. If None all timeslices are plotted.

  • time – float, list of floats, or None time to plot. If None all time slices are plotted. If not None, it takes precedence over time_index

  • fig – figure to plot in (a new figure is generated if fig is None)

  • ggd_points_triangles – Caching of ggd data structure as generated by omas_physics.grids_ggd_points_triangles() method

  • kw – arguments passed to matplotlib plot statements

Returns

figure handler

plot_gas_injection_overlay(*args, **kw)

Plots overlays of gas injectors

Parameters
  • ods – OMAS ODS instance

  • ax – axes instance into which to plot (default: gca())

  • angle_not_in_pipe_name – bool Set this to include (Angle) at the end of injector labels. Useful if injector/pipe names don’t already include angles in them.

  • which_gas

    string or list Filter for selecting which gas pipes to display.

    • If string: get a preset group, like ‘all’.

    • If list: only pipes in the list will be shown. Abbreviations are tolerated; e.g. GASA is recognized as GASA_300. One abbreviation can turn on several pipes. There are several injection location names starting with RF_ on DIII-D, for example.

  • show_all_pipes_in_group – bool Some pipes have the same R,Z coordinates of their exit positions (but different phi locations) and will appear at the same location on the plot. If this keyword is True, labels for all the pipes in such a group will be displayed together. If it is False, only the first one in the group will be labeled.

  • simple_labels – bool Simplify labels by removing suffix after the last underscore.

  • label_spacer – int Number of blank lines and spaces to insert between labels and symbol

  • colors – list of matplotlib color specifications. These colors control the display of various gas ports. The list will be repeated to make sure it is long enough. Do not specify a single RGB tuple by itself. However, a single tuple inside list is okay [(0.9, 0, 0, 0.9)]. If the color keyword is used (See **kw), then color will be popped to set the default for colors in case colors is None.

  • draw_arrow – bool or dict Draw an arrow toward the machine at the location of the gas inlet. If dict, pass keywords to arrow drawing func.

  • **kw

    Additional keywords for gas plot:

    • Accepts standard omas_plot overlay keywords listed in overlay() documentation: mask, labelevery, …

    • Remaining keywords are passed to plot call for drawing markers at the gas locations.

plot_interferometer_overlay(*args, **kw)

Plots overlays of interferometer chords.

Parameters
  • ods – OMAS ODS instance

  • ax – axes instance into which to plot (default: gca())

  • **kw

    Additional keywords

    • Accepts standard omas_plot overlay keywords listed in overlay() documentation: mask, labelevery, …

    • Remaining keywords are passed to plot call

plot_langmuir_probes_overlay(*args, **kw)

Overlays Langmuir probe locations

Parameters
  • ods – ODS instance Must contain langmuir_probes with embedded position data

  • ax – Axes instance

  • embedded_probes – list of strings Specify probe names to use. Only the embedded probes listed will be plotted. Set to None to plot all probes. Probe names are like ‘F11’ or ‘P-6’ (the same as appear on the overlay).

  • colors – list of matplotlib color specifications. Do not use a single RGBA style spec.

  • show_embedded – bool Recommended: don’t enable both embedded and reciprocating plots at the same time; make two calls instead. It will be easier to handle mapping of masks, colors, etc.

  • show_reciprocating – bool

  • **kw

    Additional keywords.

    • Accepts standard omas_plot overlay keywords listed in overlay() documentation: mask, labelevery, …

    • Others will be passed to the plot() call for drawing the probes.

plot_lh_antennas_CX(*args, **kw)

Plot LH antenna position in poloidal cross-section

Parameters
  • ods – input ods

  • time_index – int, list of ints, or None time slice to plot. If None all timeslices are plotted.

  • time – float, list of floats, or None time to plot. If None, all time slices are plotted. If not None, it takes precedence over time_index

  • ax – axes to plot in (active axes is generated if ax is None)

  • antenna_trajectory – length of antenna on plot

  • kw – arguments passed to matplotlib plot statements

Returns

axes handler

plot_lh_antennas_CX_topview(*args, **kw)

Plot LH antenna in toroidal cross-section

Parameters
  • ods – input ods

  • time_index – int, list of ints, or None time slice to plot. If None all timeslices are plotted.

  • time – float, list of floats, or None time to plot. If None, all time slices are plotted. If not None, it takes precedence over time_index

  • ax – axes to plot in (active axes is generated if ax is None)

  • kw – arguments passed to matplotlib plot statements

  • antenna_trajectory – length of antenna on plot

Returns

axes handler

plot_magnetics_bpol_probe_data(*args, **kw)

plot bpol_probe time traces and equilibrium constraints

Parameters
  • equilibrium_constraints – plot equilibrium constraints if present

  • ax – Axes instance [optional] axes to plot in (active axes is generated if ax is None)

  • **kw – Additional keywords for plot

Returns

axes instance

plot_magnetics_diamagnetic_flux_data(*args, **kw)

plot diamagnetic_flux time trace and equilibrium constraint

Parameters
  • equilibrium_constraints – plot equilibrium constraints if present

  • ax – Axes instance [optional] axes to plot in (active axes is generated if ax is None)

  • **kw – Additional keywords for plot

Returns

axes instance

plot_magnetics_flux_loop_data(*args, **kw)

plot flux_loop time traces and equilibrium constraints

Parameters
  • equilibrium_constraints – plot equilibrium constraints if present

  • ax – Axes instance [optional] axes to plot in (active axes is generated if ax is None)

  • **kw – Additional keywords for plot

Returns

axes instance

plot_magnetics_ip_data(*args, **kw)

plot ip time trace and equilibrium constraint

Parameters
  • equilibrium_constraints – plot equilibrium constraints if present

  • ax – Axes instance [optional] axes to plot in (active axes is generated if ax is None)

  • **kw – Additional keywords for plot

Returns

axes instance

plot_magnetics_overlay(*args, **kw)

Plot magnetics on a tokamak cross section plot

Parameters
  • ods – OMAS ODS instance

  • flux_loop_style – dictionary with matplotlib options to render flux loops

  • pol_probe_style – dictionary with matplotlib options to render poloidal magnetic probes

  • tor_probe_style – dictionary with matplotlib options to render toroidal magnetic probes

  • ax – axes to plot in (active axes is generated if ax is None)

Returns

axes handler

plot_nbi_summary(*args, **kw)

Plot summary of NBI power time traces

Parameters
  • ods – input ods

  • ax – axes to plot in (active axes is generated if ax is None)

Returns

axes handler

plot_overlay(*args, **kw)

Plots overlays of hardware/diagnostic locations on a tokamak cross section plot

Parameters
  • ods – OMAS ODS instance

  • ax – axes instance into which to plot (default: gca())

  • allow_autoscale – bool Certain overlays will be allowed to unlock xlim and ylim, assuming that they have been locked by equilibrium_CX. If this option is disabled, then hardware systems like PF-coils will be off the plot and mostly invisible.

  • debug_all_plots – bool Individual hardware systems are on by default instead of off by default.

  • return_overlay_list – Return list of possible overlays that could be plotted

  • **kw

    additional keywords for selecting plots.

    • Select plots by setting their names to True; e.g.: if you want the gas_injection plot, set gas_injection=True as a keyword. If debug_all_plots is True, then you can turn off individual plots by setting them to False, for example gas_injection=False.

    • Instead of True to simply turn on an overlay, you can pass a dict of keywords to pass to a particular overlay method, as in thomson={‘labelevery’: 5}. After an overlay pops off its keywords, remaining keywords are passed to plot, so you can set linestyle, color, etc.

    • Overlay functions accept these standard keywords:
      • mask: bool array

        Set of flags for switching plot elements on/off. Must be equal to the number of channels or items to be plotted.

      • labelevery: int

        Sets how often to add labels to the plot. A setting of 0 disables labels, 1 labels every element, 2 labels every other element, 3 labels every third element, etc.

      • notesize: matplotlib font size specification

        Applies to annotations drawn on the plot. Examples: ‘xx-small’, ‘medium’, 16

      • label_ha: None or string or list of (None or string) instances

        Descriptions of how labels should be aligned horizontally. Either provide a single specification or a list of specs matching or exceeding the number of labels expected. Each spec should be: ‘right’, ‘left’, or ‘center’. None (either as a scalar or an item in the list) will give default alignment for the affected item(s).

      • label_va: None or string or list of (None or string) instances

        Descriptions of how labels should be aligned vertically. Either provide a single specification or a list of specs matching or exceeding the number of labels expected. Each spec should be: ‘top’, ‘bottom’, ‘center’, ‘baseline’, or ‘center_baseline’. None (either as a scalar or an item in the list) will give default alignment for the affected item(s).

      • label_r_shift: float or float array/list.

        Add an offset to the R coordinates of all text labels for the current hardware system. (in data units, which would normally be m) Scalar: add the same offset to all labels. Iterable: Each label can have its own offset.

        If the list/array of offsets is too short, it will be padded with 0s.

      • label_z_shift: float or float array/list

        Add an offset to the Z coordinates of all text labels for the current hardware system (in data units, which would normally be m) Scalar: add the same offset to all labels. Iterable: Each label can have its own offset.

        If the list/array of offsets is too short, it will be padded with 0s.

      • Additional keywords are passed to the function that does the drawing; usually matplotlib.axes.Axes.plot().

Returns

axes handler
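The keyword-selection convention described above can be sketched with a hypothetical helper. This is illustrative only, not the actual omas_plot implementation, and the overlay names are a small subset chosen for the example:

```python
def select_overlays(debug_all_plots=False, **kw):
    """Resolve which overlays to draw from plot_overlay-style keywords.

    Pass name=True to enable an overlay with defaults, or name={...} to
    enable it with extra keywords; debug_all_plots flips the default
    from off to on for systems not mentioned explicitly.
    """
    known = ['gas_injection', 'thomson_scattering', 'magnetics']  # subset, for illustration
    selected = {}
    for name in known:
        setting = kw.pop(name, debug_all_plots)
        if setting:
            # a dict both enables the overlay and carries its keywords
            selected[name] = setting if isinstance(setting, dict) else {}
    return selected
```

For example, select_overlays(thomson_scattering={'labelevery': 5}) enables only the Thomson overlay and forwards labelevery to it.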

plot_pellets_trajectory_CX(*args, **kw)

Plot pellets trajectory in poloidal cross-section

Parameters
  • ods – input ods

  • time_index – int, list of ints, or None time slice to plot. If None all timeslices are plotted.

  • time – float, list of floats, or None time to plot. If None, all time slices are plotted. If not None, it takes precedence over time_index

  • ax – axes to plot in (active axes is generated if ax is None)

  • kw – arguments passed to matplotlib plot statements

Returns

axes handler

plot_pellets_trajectory_CX_topview(*args, **kw)

Plot pellet trajectory in toroidal cross-section

Parameters
  • ods – input ods

  • time_index – int, list of ints, or None time slice to plot. If None all timeslices are plotted.

  • time – float, list of floats, or None time to plot. If None, all time slices are plotted. If not None, it takes precedence over time_index

  • ax – axes to plot in (active axes is generated if ax is None)

  • kw – arguments passed to matplotlib plot statements

Returns

axes handler

plot_pf_active_data(*args, **kw)

plot pf_active time traces

Parameters
  • equilibrium_constraints – plot equilibrium constraints if present

  • ax – Axes instance [optional] axes to plot in (active axes is generated if ax is None)

  • **kw – Additional keywords for plot

Returns

axes instance

plot_pf_active_overlay(*args, **kw)

Plots overlays of active PF coils. INCOMPLETE: only the oblique geometry definition is treated so far. More should be added later.

Parameters
  • ods – OMAS ODS instance

  • ax – axes instance into which to plot (default: gca())

  • **kw

    Additional keywords scalex, scaley: passed to ax.autoscale_view() call at the end

    • Accepts standard omas_plot overlay keywords listed in overlay() documentation: mask, labelevery, …

    • Remaining keywords are passed to matplotlib.patches.Polygon call

      Hint: you may want to set facecolor instead of just color

plot_position_control_overlay(*args, **kw)

Overlays position_control data

Parameters
  • ods – ODS instance Must contain pulse_schedule data with position control targets

  • ax – Axes instance

  • t – float Time to display in seconds. If not specified, defaults to the average time of position control samples.

  • xpoint_marker – string Matplotlib marker spec for X-point target(s)

  • strike_marker – string Matplotlib marker spec for strike point target(s)

  • labels – list of strings [optional] Override default point labels. Length must be long enough to cover all points.

  • show_measured_xpoint – bool In addition to the target X-point, mark the measured X-point coordinates.

  • measured_xpoint_marker – string Matplotlib marker spec for X-point measurement(s)

  • **kw

    Additional keywords.

    • Accepts standard omas_plot overlay keywords listed in overlay() documentation: mask, labelevery, …

    • Others will be passed to the plot() call for drawing shape control targets

plot_pulse_schedule_overlay(*args, **kw)

Overlays relevant data from pulse_schedule, such as position control

Parameters
  • ods – ODS instance Must contain pulse_schedule data

  • ax – Axes instance

  • t – float Time in s

  • **kw

    Additional keywords.

    • Accepts standard omas_plot overlay keywords listed in overlay() documentation: mask, labelevery, …

    • Others will be passed to the plot() calls.

plot_quantity(*args, **kw)

Provides convenient way to plot 1D quantities in ODS

For example:
>>> ods.plot_quantity('@core.*elec.*dens', '$n_e$', lw=2)
>>> ods.plot_quantity('@core.*ion.0.*dens.*th', '$n_D$', lw=2)
>>> ods.plot_quantity('@core.*ion.1.*dens.*th', '$n_C$', lw=2)
Parameters
  • ods – ODS instance

  • key – ODS location or search pattern

  • yname – name of the y quantity

  • xname – name of the x quantity

  • yunits – units of the y quantity

  • xunits – units of the x quantity

  • ylabel – plot ylabel

  • xlabel – plot xlabel

  • ynorm – normalization factor for y

  • xnorm – normalization factor for x

  • label – label for the legend

  • ax – axes instance into which to plot (default: gca())

  • **kw – extra arguments are passed to the plot function

Returns

axes instance

plot_summary(*args, **kw)

Plot summary time traces. Internally makes use of plot_quantity method.

Parameters
  • ods – input ods

  • fig – figure to plot in (a new figure is generated if fig is None)

  • quantity – if None plot all time-dependent global_quantities. Else a list of strings with global quantities to plot

Returns

list of axes

plot_tf_b_field_tor_vacuum_r_data(*args, **kw)

plot b_field_tor_vacuum_r time trace and equilibrium constraint

Parameters
  • equilibrium_constraints – plot equilibrium constraints if present

  • ax – Axes instance [optional] axes to plot in (active axes is generated if ax is None)

  • **kw – Additional keywords for plot

Returns

axes instance

plot_thomson_scattering_overlay(*args, **kw)

Overlays Thomson channel locations

Parameters
  • ods – OMAS ODS instance

  • ax – axes instance into which to plot (default: gca())

  • **kw

    Additional keywords for Thomson plot:

    • Accepts standard omas_plot overlay keywords listed in overlay() documentation: mask, labelevery, …

    • Remaining keywords are passed to plot call

plot_wall_overlay(*args, **kw)

Plot walls on a tokamak cross section plot

Parameters
  • ods – OMAS ODS instance

  • ax – axes instance into which to plot (default: gca())

  • component_index – list of index of components to plot

  • types – list with one or more of [‘limiter’,’mobile’,’vessel’]

  • unit_index – list of index of units of the component to plot

Returns

axes handler

plot_waves_beam_CX(*args, **kw)

Plot waves beams in poloidal cross-section

Parameters
  • ods – input ods

  • time_index – int, list of ints, or None time slice to plot. If None all timeslices are plotted.

  • time – float, list of floats, or None time to plot. If None, all time slices are plotted. If not None, it takes precedence over time_index

  • ax – axes to plot in (active axes is generated if ax is None)

  • kw – arguments passed to matplotlib plot statements

Returns

axes handler

plot_waves_beam_profile(*args, **kw)

Plot 1d profiles of waves beams given quantity

Parameters
  • ods – input ods

  • time_index – int, list of ints, or None time slice to plot. If None all timeslices are plotted.

  • time – float, list of floats, or None time to plot. If None, all time slices are plotted. If not None, it takes precedence over time_index

  • quantity – quantity to plot

  • ax – axes to plot in (active axes is generated if ax is None)

  • kw – arguments passed to matplotlib plot statements

Returns

axes handler

plot_waves_beam_summary(*args, **kw)

Plot waves beam summary: CX, power_density, and current_parallel_density

Parameters
  • ods – input ods

  • time_index – int, list of ints, or None time slice to plot. If None all timeslices are plotted.

  • time – float, list of floats, or None time to plot. If None, all time slices are plotted. If not None, it takes precedence over time_index

  • fig – figure to plot in (a new figure is generated if fig is None)

  • kw – arguments passed to matplotlib plot statements

Returns

figure handler

pop(k[, d]) → v, remove specified key and return the corresponding value.

If key is not found, d is returned if given, otherwise KeyError is raised.

popitem() → (k, v), remove and return some (key, value) pair

as a 2-tuple; but raise KeyError if D is empty.

pretty_paths(*args, **kw)

Traverse the ods and return paths that have data formatted nicely

Parameters

**kw – extra keywords passed to the path() method

Returns

list of paths that have data formatted nicely

prune(*args, **kw)

Prune ODS branches that are leafless

Returns

number of branches that were pruned

relax(*args, **kw)

Blend floating point data in this ODS with corresponding floating point in other ODS

Parameters
  • other – other ODS

  • alpha – relaxation coefficient this_ods * (1.0 - alpha) + other_ods * alpha

Returns

list of paths that have been blended
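The documented blending formula, this_ods * (1.0 - alpha) + other_ods * alpha, can be sketched with plain dicts keyed by ODS-like path strings. This is an illustrative stand-in, not the real ODS.relax:

```python
def relax_floats(this, other, alpha):
    """Blend matching float entries in-place: this * (1 - alpha) + other * alpha.

    Returns the list of paths that were blended, as ODS.relax does.
    """
    blended = []
    for path, value in this.items():
        if path in other and isinstance(value, float):
            this[path] = value * (1.0 - alpha) + other[path] * alpha
            blended.append(path)
    return blended
```

With alpha=0.5 a value of 1.0e6 in this ODS and 2.0e6 in the other blends to 1.5e6.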

same_init_ods(*args, **kw)

Initializes a new ODS with the same attributes as this one

Returns

new ODS

sample(*args, **kw)

Populates the ods with sample data

Parameters
  • ntimes – number of time slices to generate

  • homogeneous_time – only return samples that have ids_properties.homogeneous_time either True or False

Returns

self

sample_bolometer(*args, **kw)

Adds fake bolometer chord locations. This method operates in-place

Parameters
  • ods – ODS instance

  • nc – int Number of fake channels to make up for testing (default: 10)

Returns

ODS instance with fake bolometer hardware information added

sample_charge_exchange(*args, **kw)

Adds fake CER channel locations. This method operates in-place

Parameters
  • ods – ODS instance

  • nc – Number of channels to add

Returns

ODS instance with fake CER hardware information added

sample_core_profiles(*args, **kw)

Adds sample core_profiles data. This method operates in-place.

Parameters
  • ods – ODS instance

  • time_index – int

  • add_junk_ion – bool Flag for adding a junk ion for testing how well functions tolerate problems. This will be missing labels, etc.

  • include_pressure – bool Include pressure profiles when temperature and density are added

Returns

ODS instance with profiles added

sample_core_sources(*args, **kw)

Adds sample core_sources data. This method operates in-place.

Parameters
  • ods – ODS instance

  • time_index – int

Returns

ODS instance with sources added

sample_core_transport(*args, **kw)

Adds sample core_transport data. This method operates in-place

Parameters
  • ods – ODS instance

  • time_index – int

Returns

ODS instance with core_transport data added

sample_dataset_description(*args, **kw)

None

sample_ec_launchers(*args, **kw)

Adds fake ec_launchers data to support testing. This method operates in-place

Parameters
  • ods – ODS instance

  • ngyros – number of gyrotrons

  • ntimes – number of times

Returns

ODS instance with added ec_launchers

sample_equilibrium(*args, **kw)

Adds sample equilibrium data. This method operates in-place.

Parameters
  • ods – ODS instance

  • time_index – int Under which time index should fake equilibrium data be loaded?

  • include_profiles – bool Include 1D profiles of pressure, q, p’, FF’. They are in the sample set, so not including them means deleting them.

  • include_phi – bool Include 1D and 2D profiles of phi (toroidal flux, for calculating rho). This is in the sample set, so not including it means deleting it.

  • include_psi – bool Include 1D and 2D profiles of psi (poloidal flux). This is in the sample set, so not including it means deleting it.

  • include_wall – bool Include the first wall. This is in the sample set, so not including it means deleting it.

  • include_q – bool Include safety factor. This is in the sample set, so not including it means deleting it.

  • include_xpoint – bool Include X-point R-Z coordinates. This is not in the sample set, so including it means making it up.

Returns

ODS instance with equilibrium data added

sample_gas_injection(*args, **kw)

Adds fake gas injection locations. This method operates in-place

Parameters

ods – ODS instance

Returns

ODS instance with fake gas injection hardware information added

sample_ic_antennas(*args, **kw)

Adds sample ic_antennas data. This method operates in-place.

Parameters

ods – ODS instance

Returns

ODS instance with ic_antennas data added

sample_interferometer(*args, **kw)

Adds fake interferometer locations. This method operates in-place

Parameters

ods – ODS instance

Returns

ODS instance with fake interferometer hardware information added

sample_langmuir_probes(*args, **kw)

Adds fake Langmuir probe locations. This method operates in-place

Parameters

ods – ODS instance

Returns

ODS instance with fake Langmuir probe hardware information added

sample_magnetics(*args, **kw)

Adds fake magnetic probe locations. This method operates in-place

Parameters

ods – ODS instance

Returns

ODS instance with fake magnetics hardware information added

sample_nbi(*args, **kw)

Adds fake nbi data to support testing. This method operates in-place

Parameters
  • ods – ODS instance

  • nunits – number of NBI units

  • ntimes – number of times

Returns

ODS instance with added nbi

sample_pf_active(*args, **kw)

Adds fake active PF coil locations. This method operates in-place

Parameters
  • ods – ODS instance

  • nc_weird – int Number of coils with badly defined geometry to include for testing plot overlay robustness

  • nc_undefined – int Number of coils with undefined geometry_type (But valid r, z outlines) to include for testing plot overlay robustness.

Returns

ODS instance with PF active hardware information added

sample_pulse_schedule(*args, **kw)

Adds fake control target data to support testing. This method operates in-place

Parameters

ods – ODS instance

Returns

ODS instance with added pulse schedule

sample_summary(*args, **kw)

Adds sample summary data. This method operates in-place

Parameters

ods – ODS instance

Returns

ODS instance with summary data added

sample_thomson_scattering(*args, **kw)

Adds fake Thomson scattering channel locations. This method operates in-place

Parameters
  • ods – ODS instance

  • nc – Number of channels to add.

Returns

ODS instance with fake Thomson hardware information added

sample_wall(*args, **kw)

Adds fake wall data. This method operates in-place

Parameters

ods – ODS instance

Returns

ODS instance with added wall description

satisfy_imas_requirements(*args, **kw)

Assign .time and .ids_properties.homogeneous_time info for top-level structures since these are required for writing an IDS to IMAS

Parameters
  • attempt_fix – fix dataset_description and wall IDS to have 0 times if none is set

  • raise_errors – raise errors if IMAS requirements could not be satisfied

Returns

True if all is good, False if requirements are not satisfied, None if fixes were applied
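The check that satisfy_imas_requirements performs can be sketched over nested dicts standing in for an ODS. This is a minimal illustration of the requirement (each top-level IDS needs .time and ids_properties.homogeneous_time), not the real method, which can also apply fixes and raise errors per its arguments:

```python
def check_imas_time_requirements(ods_like):
    """Return True if every top-level IDS has time info and homogeneous_time set."""
    for ids_name, ids in ods_like.items():
        props = ids.get('ids_properties', {})
        if 'homogeneous_time' not in props or 'time' not in ids:
            return False
    return True
```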

search_paths(*args, **kw)

Find ODS locations that match a pattern

Parameters
  • search_pattern – regular expression ODS location string

  • n – raise an error if a number of occurrences different from n is found

  • regular_expression_startswith – indicates that use of regular expressions in the search_pattern is preceded by certain characters. This is used internally by some methods of the ODS to force users to use ‘@’ to indicate access to a path by regular expression.

Returns

list of ODS locations matching search_pattern pattern
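The pattern matching can be sketched over a flat list of path strings. The paths below are made-up examples and this helper is a stand-in for the real search_paths method (the pattern shown is the part that would follow the ‘@’ prefix used elsewhere in the ODS API):

```python
import re

# Hypothetical flat list of ODS paths, standing in for a populated ODS
paths = [
    'core_profiles.profiles_1d.0.electrons.density_thermal',
    'core_profiles.profiles_1d.0.ion.0.density_thermal',
    'equilibrium.time_slice.0.global_quantities.ip',
]

def search_paths(pattern, paths):
    """Return the paths that match the regular expression."""
    return [p for p in paths if re.search(pattern, p)]
```

For example, search_paths('elec.*dens', paths) matches only the electron density path, while 'density_thermal' matches both density paths.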

set_time_array(*args, **kw)

Convenience function for setting time dependent arrays

Parameters
  • key – ODS location to edit

  • time_index – time index of the value to set

  • value – value to set

Returns

time dependent array
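The bookkeeping behind setting a value at a given time index can be sketched with a plain list. This is an illustrative stand-in for the real set_time_array, which operates on ODS locations:

```python
def set_time_array(arr, time_index, value):
    """Grow a time-dependent array as needed, then set value at time_index."""
    arr = list(arr)
    while len(arr) <= time_index:
        arr.append(0.0)  # pad; the real ODS handles allocation itself
    arr[time_index] = value
    return arr
```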

setdefault(*args, **kw)

Set value if key is not present

Parameters
  • key – ods location

  • value – value to set

Returns

value

setraw(*args, **kw)

Assign data to an ODS with no processing of the key; it is thus faster than ODS.__setitem__(key, value). Effectively behaves like a pure Python dictionary/list __setitem__. This method is mostly meant to be used in the inner workings of the ODS class.

Parameters
  • key – string, integer or a list of these

  • value – value to assign

Returns

value

slice_at_time(*args, **kw)

Method for selecting a time slice from a time-dependent ODS (NOTE: this method operates in-place)

Parameters
  • time – time value to select

  • time_index – time index to select (NOTE: time_index has precedence over time)

Returns

modified ODS

time(*args, **kw)

Return the time information for a given ODS location

Parameters
  • key – ods location

  • extra_info – dictionary that will be filled in place with extra information about time

Returns

time information for a given ODS location (scalar or array)

time_index(*args, **kw)

Return the index of the closest time-slice for a given ODS location

Parameters
  • time – time in seconds

  • key – ods location

Returns

index (integer) of the closest time-slice
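The closest-time-slice lookup can be sketched in one line over a plain list of times. This is an illustrative stand-in, not the real time_index method:

```python
def nearest_time_index(times, time):
    """Index of the time slice closest to the requested time (in seconds)."""
    return min(range(len(times)), key=lambda i: abs(times[i] - time))
```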

to_odx(*args, **kw)

Generate an ODX from the current ODS

Parameters

homogeneous

  • False: flat representation of the ODS

    (data is not collected across arrays of structures)

  • ’time’: collect arrays of structures only along the time dimension

    (always valid for homogeneous_time=True)

  • ’full’: collect arrays of structures along all dimensions

    (may be valid in many situations, especially related to simulation data with homogeneous_time=True and where for example the number of ions, sources, etc. do not vary)

  • None: smart setting, uses homogeneous=’time’ if homogeneous_time=True else False

Returns

ODX

update(*args, **kw)

Adds ods2’s key-value pairs to the ods

Parameters

ods2 – dictionary or ODS to be added into the ODS

values(*args, **kw)

None

xarray(*args, **kw)

Returns data of an ODS location and corresponding coordinates as an xarray dataset. Note that the Dataset and the DataArrays have their attributes set with the ODS’s structure info

Parameters

key – ODS location

Returns

xarray dataset

omfit_classes.omfit_omas.pprint_imas_data_dictionary_info(location)[source]

pretty print IMAS data dictionary info

Parameters

location – location in IMAS data dictionary

omfit_omas_d3d

Functions for adding DIII-D hardware description data to the IMAS schema by writing data to ODS instances

class omfit_classes.omfit_omas_d3d.OMFITd3dcompfile(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.omfit_ascii.OMFITascii

OMFIT class to read DIII-D compensation files such as btcomp.dat, ccomp.dat, and icomp.dat

OMFIT class to parse DIII-D MHD device files

Parameters
  • filename – filename

  • **kw – arguments passed to __init__ of OMFITascii

load()[source]
omfit_classes.omfit_omas_d3d.setup_gas_injection_hardware_description_d3d(ods, shot)[source]

Sets up DIII-D gas injector data.

R and Z are from the tips of the arrows in puff_loc.pro; phi is from the angle listed in labels in puff_loc.pro. I recorded the directions of the arrows on the EFITviewer overlay, but I don’t know how to include them in IMAS, so I commented them out.

Warning: changes to gas injector configuration with time are not yet included. This is just the best picture I could make of the 2018 configuration.

Data sources:
  • EFITVIEWER: iris:/fusion/usc/src/idl/efitview/diagnoses/DIII-D/puff_loc.pro accessed 2018 June 05, revised 20090317

  • DIII-D webpage: https://diii-d.gat.com/diii-d/Gas_Schematic accessed 2018 June 05

  • DIII-D webpage: https://diii-d.gat.com/diii-d/Gas_PuffLocations accessed 2018 June 05

Updated 2018 June 05 by David Eldon

Returns

dict Information or instructions for follow up in central hardware description setup

omfit_classes.omfit_omas_d3d.setup_pf_active_hardware_description_d3d(ods, *args)[source]

Adds DIII-D tokamak poloidal field coil hardware geometry to ODS

Parameters
  • ods – ODS instance

  • *args – catch unused args to allow a consistent call signature for hardware description functions

Returns

dict Information or instructions for follow up in central hardware description setup

omfit_classes.omfit_omas_d3d.setup_interferometer_hardware_description_d3d(ods, shot)[source]

Writes DIII-D CO2 interferometer chord locations into ODS.

The chord endpoints ARE NOT RIGHT. Only the R for vertical lines or Z for horizontal lines is right.

Data sources: DIII-D webpage: https://diii-d.gat.com/diii-d/Mci accessed 2018 June 07 by D. Eldon

Parameters
  • ods – an OMAS ODS instance

  • shot – int

Returns

dict Information or instructions for follow up in central hardware description setup

omfit_classes.omfit_omas_d3d.setup_thomson_scattering_hardware_description_d3d(ods, shot, revision='BLESSED')[source]

Gathers DIII-D Thomson measurement locations from MDSplus and loads them into OMAS

Parameters

revision – string Thomson scattering data revision, like ‘BLESSED’, ‘REVISIONS.REVISION00’, etc.

Returns

dict Information or instructions for follow up in central hardware description setup

omfit_classes.omfit_omas_d3d.setup_charge_exchange_hardware_description_d3d(ods, shot, analysis_type='CERQUICK')[source]

Gathers DIII-D CER measurement locations from MDSplus and loads them into OMAS

Parameters

analysis_type – string CER analysis quality level like CERQUICK, CERAUTO, or CERFIT. CERQUICK is probably fine.

Returns

dict Information or instructions for follow up in central hardware description setup

omfit_classes.omfit_omas_d3d.setup_bolometer_hardware_description_d3d(ods, shot)[source]

Load DIII-D bolometer chord locations into the ODS

Data sources:
  • iris:/fusion/usc/src/idl/efitview/diagnoses/DIII-D/bolometerpaths.pro

  • OMFIT-source/modules/_PCS_prad_control/SETTINGS/PHYSICS/reference/DIII-D/bolometer_geo, accessed 2018 June 11 by Eldon

Returns

dict Information or instructions for follow up in central hardware description setup

omfit_classes.omfit_omas_d3d.setup_langmuir_probes_hardware_description_d3d(ods, shot)[source]

Load DIII-D Langmuir probe locations into an ODS

Parameters
  • ods – ODS instance

  • shot – int

Returns

dict Information or instructions for follow up in central hardware description setup

omfit_classes.omfit_omas_d3d.find_active_d3d_probes(shot, allowed_probes=None)[source]

Serves LP functions by identifying active probes (those that have actual data saved) for a given shot

MDSplus calls are looped on the client side because a server-side loop over all the probes was not available; resampling is used to speed that part up.

This could be a lot faster if the GETNCI commands could be made to work on records of array signals.

Parameters
  • shot – int

  • allowed_probes – int array Restrict the search to a certain range of probe numbers to speed things up. These are the numbers of storage trees in MDSplus, not the physical probe numbers

Returns

list of ints

omfit_classes.omfit_omas_d3d.load_data_langmuir_probes_d3d(ods, shot, probes=None, allowed_probes=None, tstart=0, tend=0, dt=0.0002, overwrite=False, quantities=None)[source]

Downloads LP probe data from MDSplus and loads them to the ODS

Parameters
  • ods – ODS instance

  • shot – int

  • probes – int array-like [optional] Integer array of DIII-D probe numbers. If not provided, find_active_d3d_probes() will be used.

  • allowed_probes – int array-like [optional] Passed to find_active_d3d_probes(), if applicable. Improves speed by limiting search to a specific range of probe numbers.

  • tstart – float Time to start resample (s)

  • tend – float Time to end resample (s). Set to <= tstart to disable resample. Server-side resampling does not work when time does not increase monotonically, which is a typical problem for DIII-D data. Resampling is not recommended for DIII-D.

  • dt – float Resample interval (s). Set to 0 to disable resample. Server-side resampling does not work when time does not increase monotonically, which is a typical problem for DIII-D data. Resampling is not recommended for DIII-D.

  • overwrite – bool Download and write data even if they already are present in the ODS.

  • quantities – list of strings [optional] List of quantities to gather. None to gather all available. Options are: ion_saturation_current, heat_flux_parallel, n_e, t_e, surface_area_effective, v_floating, and b_field_angle

Returns

ODS instance The data are added in-place, so catching the return is probably unnecessary.

omfit_omas_east

Functions for adding EAST hardware description data to the IMAS schema by writing data to ODS instances

omfit_classes.omfit_omas_east.setup_pf_active_hardware_description_east(ods, *args)[source]

Adds EAST tokamak poloidal field coil hardware geometry to ODS

Parameters
  • ods – ODS instance

  • *args – catch unused args to allow a consistent call signature for hardware description functions

Returns

dict Information or instructions for follow up in central hardware description setup

omfit_classes.omfit_omas_east.east_coords_along_wall(s, rlim, zlim, surface)[source]

Transforms s into R, Z. Useful for finding LP locations

Parameters
  • s – numeric Distance along the wall from a reference point (m)

  • rlim – 1D array R coordinates along the limiting surface (m)

  • zlim – 1D array Z coordinates along the limiting surface (m)

  • surface – str Which surface / reference should be used? ‘uo’, ‘ui’, ‘lo’, or ‘li’

Returns

tuple (R, Z, slim) R value(s) corresponding to s value(s) (m), Z value(s) corresponding to s value(s) (m), and slim: s values corresponding to rlim and zlim (m)
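The mapping from s to (R, Z) amounts to arc-length interpolation along the wall polyline. A simplified sketch, ignoring the EAST-specific surface/reference-point selection (the helper name is illustrative):

```python
import numpy as np

def coords_along_wall(s, rlim, zlim):
    """Map distance-along-wall s (m) to (R, Z) by arc-length
    interpolation along the limiting-surface polyline."""
    rlim = np.asarray(rlim, float)
    zlim = np.asarray(zlim, float)
    seg = np.hypot(np.diff(rlim), np.diff(zlim))
    slim = np.concatenate(([0.0], np.cumsum(seg)))  # s at each wall vertex
    r = np.interp(s, slim, rlim)
    z = np.interp(s, slim, zlim)
    return r, z, slim
```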

omfit_classes.omfit_omas_east.setup_langmuir_probes_hardware_description_east(ods, shot=None)[source]

Load EAST Langmuir probe locations into an ODS

Parameters
  • ods – ODS instance

  • shot – int Will try to fill in from ODS machine data if None

Returns

dict Information or instructions for follow up in central hardware description setup

omfit_omas_kstar

Functions for adding KSTAR data to the IMAS schema by writing to ODS instances

omfit_classes.omfit_omas_kstar.find_active_kstar_probes(shot, allowed_probes=None)[source]

Serves LP functions by identifying active probes (those that have actual data saved) for a given shot

Sorry, I couldn’t figure out how to do this with a server-side loop over all the probes, so we have to loop MDS calls on the client side. At least I resampled to speed that part up.

This could be a lot faster if I could figure out how to make the GETNCI commands work on records of array signals.

Parameters
  • shot – int

  • allowed_probes – int array Restrict the search to a certain range of probe numbers to speed things up

Returns

list of ints

omfit_classes.omfit_omas_kstar.load_data_langmuir_probes_kstar(ods, shot, probes=None, allowed_probes=None, tstart=0, tend=10, dt=0.0002, overwrite=False, quantities=None)[source]

Downloads LP probe data from MDSplus and loads them to the ODS

Parameters
  • ods – ODS instance

  • shot – int

  • probes – int array-like [optional] Integer array of KSTAR probe numbers. If not provided, find_active_kstar_probes() will be used.

  • allowed_probes – int array-like [optional] Passed to find_active_kstar_probes(), if applicable. Improves speed by limiting search to a specific range of probe numbers.

  • tstart – float Time to start resample (s)

  • tend – float Time to end resample (s) Set to <= tstart to disable resample

  • dt – float Resample interval (s) Set to 0 to disable resample

  • overwrite – bool Download and write data even if they already are present in the ODS.

  • quantities – list of strings [optional] List of quantities to gather. None to gather all available. Options are: ion_saturation_current. Since KSTAR has only one option at the moment, this keyword is ignored, but it is accepted to provide a consistent call signature compared to similar functions for other devices.

Returns

ODS instance The data are added in-place, so catching the return is probably unnecessary.

omfit_classes.omfit_omas_kstar.setup_langmuir_probes_hardware_description_kstar(ods, shot)[source]

Load KSTAR Langmuir probe locations into an ODS

Parameters
  • ods – ODS instance

  • shot – int

Returns

dict Information or instructions for follow up in central hardware description setup

omfit_omas_utils

omfit_classes.omfit_omas_utils.add_generic_OMFIT_info_to_ods(ods, root=None)[source]

This function will fill in information in the ids_properties and code section of the input ods and return the updated ods

Parameters
  • ods – An omas ods instance

  • root – An OMFITmodule, from which to extract the commit

Returns

The updated ods

omfit_classes.omfit_omas_utils.add_experiment_info_to_ods(ods, root=None)[source]

This function will fill in information about the machine/pulse

Parameters
  • ods – An omas ods instance

  • root – An OMFITmodule, from which to extract the machine/pulse info

Returns

The updated ods

omfit_classes.omfit_omas_utils.ensure_consistent_experiment_info(ods, device, shot)[source]

Ensures that the ODS, device, and shot are consistent

If machine or pulse are not set, they are set to the provided device / shot. If they are set but inconsistent, AssertionError is raised.

Parameters
  • ods – ODS instance

  • device – str

  • shot – int
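The fill-or-assert behavior described above can be sketched with the ODS modeled as a plain dict; the path-style keys follow the OMAS dataset_description convention but are illustrative here, not the actual implementation:

```python
def ensure_consistent_info(ods, device, shot):
    """Fill in machine/pulse if unset; raise AssertionError if they are
    set but inconsistent. `ods` is modeled as a flat dict for illustration."""
    machine = ods.get('dataset_description.data_entry.machine')
    pulse = ods.get('dataset_description.data_entry.pulse')
    if machine is None:
        ods['dataset_description.data_entry.machine'] = device
    else:
        assert machine == device, f'ODS machine {machine} != {device}'
    if pulse is None:
        ods['dataset_description.data_entry.pulse'] = shot
    else:
        assert pulse == shot, f'ODS pulse {pulse} != {shot}'
```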

omfit_classes.omfit_omas_utils.verify_ods_eq_code_params_ts(ods)[source]

Ensures that the ODS contains code.params.equilibrium.time_slice

If any are missing, an AssertionError is raised.

Parameters

ods – ODS instance

omfit_classes.omfit_omas_utils.setup_hardware_overlay_cx(device, pulse=None, time=None, efitid='EFIT01', geqdsk=None, aeqdsk=None, meqdsk=None, keqdsk=None, overwrite=False, default_load=True, minimal_eq_data=True, no_empty=False, **kw)[source]

Sets up an OMAS ODS so that it can power a cross section view with diagnostic/hardware overlays. This involves looking up and writing locations of various hardware to the ODS.

Parameters
  • device – string Which tokamak?

  • pulse – int Which pulse number? Used for gathering equilibrium and for looking up hardware configuration as it could vary with time.

  • time – int or float array [optional] Time (ms) within the pulse, used for looking up equilibrium only If None and no gEQDSKs, try to get all times from MDSplus

  • efitid – string EFIT SNAP file or ID tag in MDS plus: used for looking up equilibrium only

  • geqdsk – OMFITgeqdsk instance or dict-like containing OMFITgeqdsk instance(s) (optional) Provides EFIT instead of lookup using device, pulse, time, efitid. efitid will be ignored completely. device and pulse will still be used to look up hardware configuration. time might be used. Providing inconsistent data may produce confusing plots.

  • aeqdsk – OMFITaeqdsk instance or dict-like containing OMFITaeqdsk instance(s) (optional) Provides an option to load aeqdsk data to OMAS. Requirements: - geqdsk(s) are being used as the source for basic data - aeqdsk shot and all aeqdsk times match geqdsk shot and times exactly - OMFITaeqdsk has a to_omas() method (not implemented yet as of 2021-11-12)

  • meqdsk – OMFITmeqdsk instance or dict-like containing OMFITmeqdsk instance(s) (optional) Provides an option to load meqdsk data to OMAS. Requirements: - geqdsk(s) are being used as the source for basic data - meqdsk shot and all meqdsk times match geqdsk shot and times exactly

  • keqdsk – OMFITkeqdsk instance or dict-like containing OMFITkeqdsk instance(s) (optional) Provides an option to load keqdsk data to OMAS. Requirements: - geqdsk(s) are being used as the source for basic data - keqdsk shot and all keqdsk times match geqdsk shot and times exactly

  • overwrite – bool Flag indicating whether it is okay to overwrite locations if they already exist in ODS

  • default_load – bool Default action to take for loading a system. For example, **kw lets you explicitly set gas_injection=False to prevent calling setup_gas_injection_hardware_description. But for systems which aren’t specified, the default action (True/False to load/not load) is controlled by this parameter.

  • minimal_eq_data – bool Skip loading all the equilibrium data needed to recreate GEQDSK files and only get what’s needed for plots.

  • no_empty – bool Filter out equilibrium time-slices that have 0 current or 0 boundary outline points. (this is passed to multi_efit_to_omas())

  • **kw – keywords dictionary Disable gathering/setup of data for individual hardware systems by setting them to False using their names within IMAS. For example: gas_injection=False will prevent the call to setup_gas_injection_hardware_description().

Returns

OMAS ODS instance containing the data you need to draw a cross section w/ diagnostic/hardware overlays.
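The interaction between default_load and the per-system **kw switches can be summarized with a small sketch (a hypothetical helper, not OMFIT code): an explicit True/False in **kw wins, and anything unspecified falls back to default_load.

```python
def systems_to_load(available, default_load=True, **kw):
    """Decide which hardware systems to gather: explicit True/False
    keywords override; otherwise fall back to default_load."""
    return [sys for sys in available if kw.get(sys, default_load)]
```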

omfit_classes.omfit_omas_utils.toray_to_omas(ech_nml, uf_azi, uf_pol, uf_pow, ods=None, root=None)[source]

Given 3 ECH UFILEs (azimuthal angle, polar angle, and power), return an ODS with the launcher info filled out

Parameters
  • ech_nml – A TORAY namelist

  • uf_azi – An ECH Ufile containing the azimuthal angle

  • uf_pol – An ECH Ufile containing the polar angle

  • uf_pow – An ECH Ufile containing the power

  • ods – An existing ODS, to which to add the launcher info from the Ufiles

  • root – An OMFITmodule instance (for experiment and generic OMFIT info)

omfit_classes.omfit_omas_utils.nubeam_to_omas(nb_nml, nb_uf=None, ods=None, root=None)[source]

Given a NUBEAM namelist and UFILE, return an ODS with the beam info filled out

Parameters
  • nb_nml – A NUBEAM namelist

  • nb_uf – A NUBEAM Ufile

  • ods – An existing ODS, to which to add the beam info from the namelist and Ufile

  • root – An OMFITmodule instance (for experiment and generic OMFIT info)

omfit_classes.omfit_omas_utils.add_hardware_to_ods(ods, device, pulse, hw_sys, overwrite=False)[source]

Adds a single hardware system’s info to an ODS (operates in place)

Parameters
  • ods – ODS instance

  • device – string

  • pulse – int

  • hw_sys – string

  • overwrite – bool

Returns

updated ods

omfit_classes.omfit_omas_utils.multi_efit_to_omas(device, pulse, efitid, ods=None, minimal=False, aeqdsk_time_diff_tol=0.1, no_empty=False, **kw)[source]

Writes equilibrium data from MDSplus to ODS

Parameters
  • device – string

  • pulse – int

  • efitid – string

  • ods – ODS instance A New ODS will be created if this is None

  • minimal – bool Only gather and add enough data to run a cross-section plot

  • aeqdsk_time_diff_tol – float Time difference in ms to allow between GEQDSK and AEQDSK time bases, in case they don’t match exactly. GEQDSK slices where the closest AEQDSK time are farther away than this will not have AEQDSK data.

  • no_empty – bool Remove empty GEQDSK slices from the result. Sometimes EFITs are loaded with a few invalid/empty slices (0 current, no boundary, all psi points = 0, etc.). This option will discard those slices before loading results into an ODS.

  • **kw – Additional keywords to read_basic_eq_from_mds() But not the quantities to gather as those are set explicitly in this function.

Returns

ODS instance The edit is done in-place, so you don’t have to catch the output if you supply the input ODS. On fail: an empty ODS is returned.
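The aeqdsk_time_diff_tol matching can be sketched as a nearest-time lookup with a tolerance cutoff (illustrative helper, not OMFIT code): each GEQDSK time gets the index of the closest AEQDSK time, or -1 when the closest one is farther away than the tolerance.

```python
import numpy as np

def match_times(g_times, a_times, tol):
    """Index of the nearest AEQDSK time for each GEQDSK time,
    or -1 when the nearest one is farther away than tol (ms)."""
    g = np.asarray(g_times, float)
    a = np.asarray(a_times, float)
    idx = np.abs(a[None, :] - g[:, None]).argmin(axis=1)
    ok = np.abs(a[idx] - g) <= tol
    return np.where(ok, idx, -1)
```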

omfit_classes.omfit_omas_utils.pf_coils_to_ods(ods, coil_data)[source]

Transfers poloidal field coil geometry data from a standard format used by efitviewer to ODS.

WARNING: only rudimentary identifiers are assigned. You should assign your own identifiers and only rely on this function to assign numerical geometry data.

Parameters
  • ods – ODS instance Data will be added in-place

  • coil_data – 2d array coil_data[i] corresponds to coil i. The columns are R (m), Z (m), dR (m), dZ (m), tilt1 (deg), and tilt2 (deg) This should work if you just copy data from iris:/fusion/usc/src/idl/efitview/diagnoses/<device>/coils.dat (the filenames for the coils vary)

Returns

ODS instance
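Each row of coil_data describes one rectangular coil element. Ignoring the tilt columns, the corner outline can be recovered from the center and full width/height, as in this sketch (the helper is illustrative, not the actual conversion code):

```python
import numpy as np

def coil_outline(r, z, dr, dz):
    """Corner coordinates of an untilted rectangular coil element
    given its center (r, z) and full width/height (dr, dz) in meters."""
    rr = r + np.array([-1, 1, 1, -1]) * dr / 2.0
    zz = z + np.array([-1, -1, 1, 1]) * dz / 2.0
    return rr, zz
```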

omfit_classes.omfit_omas_utils.make_sample_ods(device='test', shot=123456, efitid='fakesample')[source]

Generates an ODS with some minimal test data, including a sample equilibrium and at least one hardware system

Parameters
  • device – string

  • shot – int

  • efitid – string

Returns

ODS

omfit_classes.omfit_omas_utils.transp_ic_to_omas(tr_in, ods=None, uf_p=None, uf_f=None, root=None)[source]

Convert the TRANSP input namelist, tr_in, to omas

Parameters
  • tr_in – A TRANSP input namelist (TR.DAT)

  • ods – An existing ODS object to update with IC antenna/power info

  • uf_p – A ufile with power traces

  • uf_f – A ufile with frequency traces

  • root – An OMFITmodule instance (for experiment and generic OMFIT info)

omfit_classes.omfit_omas_utils.setup_magnetics_hardware_description_general(ods, pulse, device=None)[source]

Writes magnetic probe locations into ODS.

Parameters
  • ods – an OMAS ODS instance

  • device – string Which tokamak?

  • pulse – int

Returns

dict Information or instructions for follow up in central hardware description setup

omfit_classes.omfit_omas_utils.get_shape_control_points(dev=None, sh=None, times_=None, debug=False, debug_out=None)[source]

Gathers shape control points describing targets for the plasma boundary to intersect and returns them

Parameters
  • dev – string

  • sh – int

  • times_ – float array [optional] Times to use. If provided, interpolate data. If not, auto-assign using first valid segment.

  • debug – bool

  • debug_out – dict-like [optional]

Returns

tuple (

t: 1d float array (nt): times in ms (actual times used)
r: 2d float array (nt * nseg): R coordinates (m) of shape control points. Unused points are filled with NaN.
z: 2d float array (nt * nseg): Z coordinates (m) of shape control points
rx: 2d float array (nt * 2): R coordinates (m) of X-points (2nd one may be NaN)
zx: 2d float array (nt * 2): Z coordinates (m) of X-points. Order is [bottom, top].
rptnames: list of strings (nseg) giving labels for describing control points
zptnames: list of strings (nseg) giving labels for describing control points
list of 1D bool arrays vs. t giving validity of outer bottom, inner bottom, outer top, inner top strike points

)

omfit_classes.omfit_omas_utils.setup_position_control_hardware_description_general(ods, shot, device=None, limiter_efit_id=None, debug=False, debug_out=None)[source]

Loads DIII-D shape control data into the ODS

Parameters
  • ods – ODS instance

  • shot – int

  • device – string Only works for devices that have pointnames and other info specified. DIII-D should work.

  • limiter_efit_id – string [optional]

  • debug – bool

  • debug_out – dict-like Catches debug output

Returns

dict Information or instructions for follow up in central hardware description setup

omfit_classes.omfit_omas_utils.setup_pulse_schedule_hardware_description_general(ods, pulse, device=None)[source]

Sets up the pulse_schedule ODS, which holds control information

This is a pretty broad category! Not all subsets are supported so far. Sorry.

Parameters
  • ods – ODS instance

  • pulse – int

  • device – string

Returns

dict Information or instructions for follow up in central hardware description setup

omfit_classes.omfit_omas_utils.classify_ods_eq_topology(ods, dn_tolerance=0.001)[source]

Figure out whether the shape is USN, LSN, or limited

Parameters
  • ods – ODS instance

  • dn_tolerance – float Tolerance (in terms of normalized psi difference) for declaring a double null

Returns

1D float array Flag indicating whether each equilibrium slice is LSN (-1), USN (+1), DN (0), or unknown/limited (np.NaN)
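The classification logic for a single slice can be sketched by comparing the normalized psi of the lower and upper X-points against dn_tolerance (illustrative helper; the real function extracts these values from the equilibrium IDS):

```python
import numpy as np

def classify_topology(psin_lower_x, psin_upper_x, dn_tolerance=0.001):
    """Classify one slice as LSN (-1), USN (+1), DN (0), or
    unknown/limited (NaN) from X-point normalized psi values."""
    if np.isnan(psin_lower_x) and np.isnan(psin_upper_x):
        return np.nan  # no X-points found: limited or unknown
    diff = psin_lower_x - psin_upper_x
    if abs(diff) <= dn_tolerance:
        return 0.0  # double null within tolerance
    # The X-point with smaller psi_N sits on the primary separatrix
    return -1.0 if diff < 0 else 1.0
```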

omfit_classes.omfit_omas_utils.get_strike_point(ods, in_out='outer', pri_sec='primary')[source]

Identify a specific strike point and get its coordinates

It’s easy to just pick a strike point, or even find inner-upper or outer-lower, but identifying primary-outer or primary-inner is a little harder to do. It’s not clear that there’s any consistent ordering of strike points that would trivialize this process, so this function exists to do it for you.

Parameters
  • ods – ODS instance

  • in_out – str ‘outer’: try to get an outer strike point (default) ‘inner’: try to get an inner strike point

  • pri_sec – str ‘primary’: try to get a strike point connected to the primary X-point (default) ‘secondary’: try to get a strike point connected with a secondary X-point

Returns

tuple
time: 1D float array (s)
r_strike: 1D float array (m)
z_strike: 1D float array (m)

exception omfit_classes.omfit_omas_utils.OmasUtilsBadInput[source]

Bases: ValueError, omfit_classes.exceptions_omfit.doNotReportException

Bad inputs were given to a method in omfit_omas_utils.py

omfit_classes.omfit_omas_utils.orthogonal_distance(ods, r0, z0, grid=True, time_range=None, zf=5, maxstep=0.0005, minstep=1e-05, maxsteps=5000, debug=False)[source]

Calculates the orthogonal distance from some point(s) to the LCFS

Works by stepping along the steepest gradient until the LCFS is reached.

Parameters
  • ods – ODS instance A complete, top level ODS, or selection of time slices from equilibrium. That is, ods[‘equilibrium’][‘time_slice’], if ods is a top level ODS.

  • r0 – 1D float array R coordinates of the point(s) in meters. If grid=False, the arrays must have length matching the number of time slices in equilibrium.

  • z0 – 1D float array Z coordinates of the point(s) in meters. Length must match r0

  • grid – bool Return coordinates on a time-space grid, assuming each point in R-Z is static and given a separate history. Otherwise (grid=False), assume one point is given & it moves in time, so return array will be 1D vs. time.

  • time_range – 2 element numeric iterable [optional] Time range in seconds to use for filtering time_slice in ODS

  • zf – float Zoom factor for upsampling the equilibrium first to improve accuracy. Ignored unless > 1.

  • maxstep – float Maximum step size in m Restraining step size should prevent flying off the true path

  • minstep – float Minimum step size in m Prevent calculation from taking too long by forcing a minimum step size

  • maxsteps – int Maximum number of steps allowed in path tracing. Protection against getting stuck in a loop.

  • debug – bool Returns a dictionary with internal quantities instead of an array with the final answer.

Returns

float array Length of a path orthogonal to flux surfaces from (r0, z0) to LCFS, in meters. If grid=True: 2D float array (time by points) If grid=False: a 1D float array vs. time
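The "step along the steepest gradient" idea can be illustrated on an analytic field where psi_N = R^2 + Z^2, so the LCFS is the unit circle and the orthogonal distance from (0.5, 0) is exactly 0.5. This is a sketch of the stepping loop only, not the OMFIT implementation:

```python
import numpy as np

def ortho_distance(psi_func, grad_func, r0, z0, psi_lcfs=1.0,
                   maxstep=5e-4, maxsteps=20000):
    """Step along the local psi gradient from (r0, z0) until psi_lcfs
    is crossed; return the accumulated path length in meters."""
    r, z, dist = r0, z0, 0.0
    for _ in range(maxsteps):
        psi = psi_func(r, z)
        if psi >= psi_lcfs:
            break  # reached (or crossed) the LCFS
        gr, gz = grad_func(r, z)
        gnorm = np.hypot(gr, gz)
        # Cap the step so we neither fly off the path nor overshoot psi_lcfs
        step = min(maxstep, (psi_lcfs - psi) / gnorm)
        r += step * gr / gnorm
        z += step * gz / gnorm
        dist += step
    return dist
```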

omfit_classes.omfit_omas_utils.load_sample_eq_specifications(ods, hrts_gate='any')[source]

Loads a sample of equilibrium shape specifications under pulse_schedule

Parameters
  • ods – ODS instance Data go here

  • hrts_gate – str One of the boundary points is for checking whether the boundary passes through the HRTS range, but DIII-D’s HRTS can be reconfigured to handle three different ranges. Select ‘top’ (relevant when the HRTS channels are positioned high; this might be the default position), ‘mid’, ‘low’, or ‘any’ (goes from the bottom of the ‘low’ range to the top of the ‘top’ range).

class omfit_classes.omfit_omas_utils.GradeEquilibrium(ods, debug_out=None)[source]

Bases: object

Grades an equilibrium shape’s conformity to specified shape references

Parameters
  • ods – ODS instance

  • debug_out – dict-like [optional] Provide a place to store debugging output, or provide None to disable

printdq(*args, **kw)[source]
zoomed_eq(slice_index=0, zoom=7)[source]

Returns an upsampled equilibrium, as may be needed for some methods

Parameters
  • slice_index – int Index of the time slice of the equilibrium

  • zoom – zoom / upscaling factor

Returns

tuple
1D float array: r values of the grid
1D float array: z values of the grid
2D float array: psi_N at (r, z)

hires_contour_sk(slice_index=0, psin=1, zoom=10)[source]

Returns r, z points along a high resolution contour at some psi_N value

Requires skimage package, which isn’t required by OMFIT and may not be available. THIS IS BETTER IF IT IS AVAILABLE!

Parameters
  • slice_index – int Index of the time slice of the equilibrium

  • psin – float psi_N of the desired contour

  • zoom – float zoom / upscaling factor

Returns

list Each element is a section of the contour, which might not be connected (e.g. different curve for PFR) Each element is a 2D array, where [:, 0] is R and [:, 1] is Z

hires_contour_omf(slice_index=0, psin=1, zoom=10)[source]

Returns r, z points along a high resolution contour at some psi_N value

Uses OMFIT util functions. 10-50% slower than skimage, but doesn’t require external dependencies. skimage is better if you have it!

Parameters
  • slice_index – int Index of the time slice of the equilibrium

  • psin – float psi_N of the desired contour

  • zoom – float zoom / upscaling factor

Returns

list Each element is a section of the contour, which might not be connected (e.g. different curve for PFR) Each element is a 2D array, where [:, 0] is R and [:, 1] is Z

hires_contour(slice_index=0, psin=1, zoom=10)[source]

Wrapper for picking whether to use omfit or skimage version

find_midplane(slice_index)[source]

Finds the intersection of the boundary with the outboard midplane

Parameters

slice_index – int

Returns

(float, float) R, Z coordinates of the LCFS at the outboard midplane
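Finding the outboard midplane crossing reduces to locating where the boundary contour crosses the midplane height and taking the largest-R crossing. A self-contained sketch under that assumption (illustrative, not the class method):

```python
import numpy as np

def outboard_midplane(rb, zb, z_mid):
    """R where the boundary polyline crosses z_mid on the outboard
    (largest-R) side, by linear interpolation along each segment."""
    rb, zb = np.asarray(rb, float), np.asarray(zb, float)
    r_cross = []
    for i in range(len(rb) - 1):
        z1, z2 = zb[i], zb[i + 1]
        if (z1 - z_mid) * (z2 - z_mid) <= 0 and z1 != z2:
            f = (z_mid - z1) / (z2 - z1)
            r_cross.append(rb[i] + f * (rb[i + 1] - rb[i]))
    return max(r_cross)  # outboard crossing has the largest R
```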

find_fluxsurface(slice_index=0, midplane_dr=0.005)[source]

Finds contour segments of a flux surface outboard of the midplane by a specific amount

Parameters
  • slice_index – int

  • midplane_dr – float

Returns

list of 2D arrays Sections of the contour for the flux surface The contour may have some disconnected regions in general, but if it is all simply connected, there will be only one element in the list with a single 2D array.

new_fig(slice_index=0)[source]

Prepares a new figure and set of axes with aspect ratio and cornernote set up

plot_hires_eq(slice_index=0, ax=None, psin=None, r_rsep=None, track=None, **plotkw)[source]

Plots the high resolution contours generated for the grading process.

Can be used with track to track whether a flux surface is already shown to avoid redundant overlays of the same thing.

Parameters
  • slice_index – int

  • psin – float

  • r_rsep – float

  • ax – Axes instance

  • track – dict-like [optional] Keeps track of what’s already plotted to avoid redundant overlays of the same flux surfaces

  • plotkw – extra keywords will be passed to plot

plot_flux_gates(slice_index=0, ax=None, track=None)[source]

Plots locations of flux gates

Parameters
  • slice_index – int

  • ax – Axes instance

  • track – dict-like [optional]

plot_boundary_points(slice_index=0, ax=None)[source]

Plots reference or target points along the boundary outline

Parameters
  • slice_index – int

  • ax – Axes instance

plot_x_points(slice_index=0, ax=None)[source]

Plots X-points and reference X-points

Parameters
  • slice_index – int

  • ax – Axes instance

plot(slice_index=0, ax=None)[source]

Plots the equilibrium cross section with shape targets marked

Instead of the standard equilibrium cross section plot supplied by OMAS, the high resolution contours generated for the grading process are shown.

Parameters
  • slice_index – int Which slice should be shown?

  • ax – Axes instance Plot on these axes, if supplied. Otherwise, create a new figure and set up cornernote and axes scale.

grade(simple_x_point_mapping=True)[source]

Primary method for grading equilibrium shape conformity to reference shape.

Results are placed in self.results and also returned

Parameters

simple_x_point_mapping – bool
True: use simple mapping of measured X-points to targets: just find the closest X-point to each target.
False: try hard to sort X-points into primary (psin = 1) and secondary (psin > 1) and only compare primary targets to primary measurements.

Returns

dict Dictionary of results, with one key per category

grade_x_points(improve_xpoint_measurement=True, simple_map=True, psin_tolerance_primary=0.0005)[source]

Grades conformity of X-point position(s) to specifications

Parameters
  • improve_xpoint_measurement – bool

  • simple_map – bool For each target, just find the closest measured X-point and compare. Requires at least as many measurements as targets and raises OmasUtilsBadInput if not satisfied.

  • psin_tolerance_primary – float Tolerance in psin for declaring an X-point to be on the primary separatrix

Returns

list of dicts One list element per time-slice. Within each dictionary, keys give labels for X-point targets, and values give normalized distance errors.
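The simple_map matching amounts to: for each target X-point, take the distance to the closest measured X-point, failing when there are fewer measurements than targets. A sketch of just that step (illustrative, not the class method):

```python
import numpy as np

def simple_xpoint_map(targets, measured):
    """Distance from each target X-point to its closest measured
    X-point. Requires len(measured) >= len(targets)."""
    if len(measured) < len(targets):
        raise ValueError('need at least as many measurements as targets')
    targets = np.asarray(targets, float)
    measured = np.asarray(measured, float)
    dists = []
    for t in targets:
        dr, dz = (measured - t).T  # per-measurement offsets
        dists.append(np.hypot(dr, dz).min())
    return dists
```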

grade_boundary_points()[source]

Grades proximity of boundary to target boundary points

We need a high resolution flux surface contour for each time slice, so I think we have to loop and upsample each one individually.

grade_boundary_points_slice(slice_index=0)[source]

Grades proximity of boundary to target boundary points for a specific time-slice of the equilibrium

Strike points are included and treated like other boundary points (except they get a note appended to their name when forming labels).

There can be differences in the exact wall measurements (such as attempts to account for toroidal variation, etc.) that cause a strike point measurement or specification to not be on the reported wall. Since I can’t think of a way to enforce consistent interpretation of where the wall is between the strike point specification and the measured equilibrium, I’ve settled for making sure the boundary goes through the specified strike point. This could go badly if the contour segment passing through the specified strike point were disconnected from the core plasma, such as if it limited or something. I’m hoping to get away with this for now, though.

Parameters

slice_index – int Number of the time-slice to consider for obtaining the LCFS.

grade_gates()[source]

Grades boundary on passing through ‘gates’: pairs of points

grade_gates_slice(slice_index=0)[source]

Grades flux surfaces on passing through ‘gates’ (pairs of points) at a specific time-slice

omfit_onetwo

class omfit_classes.omfit_onetwo.OMFIToutone(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.omfit_ascii.OMFITascii

OMFITobject used to read the ONETWO outone file

Parameters
  • filename – filename passed to OMFITascii class

  • debug – provide verbose statements of parts of the outone file that may be skipped

  • skip_inone – don’t parse the inone section of the outone file as a Namelist (default=True)

  • keyw – keyword dictionary passed to OMFITascii class

load()[source]
combine_times()[source]

In the course of parsing, there could be duplicate times for some quantities. These can be combined into a single time.

convert_outone_to_nc()[source]

Convert self to a netCDF format file, which is returned from this function

save()[source]

The save method is supposed to be overridden by classes which use OMFITobject as a superclass. If left as it is, this method can detect if .filename was changed and, if so, makes a copy from the original .filename (saved in the .link attribute) to the new .filename

class omfit_classes.omfit_onetwo.OMFITstatefile(*args, **kwargs)[source]

Bases: omfit_classes.omfit_nc.OMFITnc

Class for handling the netcdf statefile from ONETWO, with streamlining for plotting and summing heating terms

Parameters
  • filename – The location on disk of the statefile

  • verbose – Turn on printing of debugging messages for this object

  • persistent – Upon loading, this class converts some variables from the psi grid to the rho grid, but it only saves these variables back to the statefile if persistent is True, which is slower

volumetric_electron_heating_terms = {'qbeame': [1, 2], 'qdelt': [-1, 11], 'qfuse': [1, 6], 'qione': [-1, 602], 'qohm': [1, 7], 'qrad': [-1, 10], 'qrfe': [1, 3]}
volumetric_ion_heating_terms = {'qbeami': [1, 2], 'qcx': [-1, 305], 'qdelt': [1, 11], 'qfusi': [1, 6], 'qioni': [1, 602], 'qrfi': [1, 5]}
volumetric_electron_particles_terms = {'s2d_e': [1, 0], 'sbeame': [1, 2], 'sion_imp_e': [1, 0], 'sion_thermal_e': [1, 601], 'spellet': [1, 14], 'srecom_e': [1, 602], 'ssaw_e': [1, 0]}
volumetric_momentum_terms = {'storqueb': [1, 1]}
load()[source]

Load the variable names, and convert variables on the psi grid to the rho grid

interp_npsi_vars_to_rho(verbose=False)[source]

Some variables are only defined on the psi grid. Interpolate these onto the rho grid.

volume_integral(v)[source]

Volume integrate v up to flux surface rho:

/rho                      /rho
|    v dV  =>  4 pi^2 R_0 |      v hcap rho' drho'
/0                        /0
Parameters

v – can be a variable string (key of variable dictionary) or an array on the rho grid
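The documented integral can be sketched with a cumulative trapezoidal rule on the rho grid (illustrative helper with the grid quantities passed explicitly; the class method pulls them from the statefile):

```python
import numpy as np

def volume_integral(v, rho, hcap, r0):
    """Cumulative volume integral 4 pi^2 R_0 * int_0^rho v hcap rho' drho',
    evaluated with the trapezoidal rule on the rho grid."""
    integrand = v * hcap * rho
    # Cumulative trapezoid with a leading zero so output matches the grid
    cum = np.concatenate(([0.0], np.cumsum(
        0.5 * (integrand[1:] + integrand[:-1]) * np.diff(rho))))
    return 4 * np.pi**2 * r0 * cum
```

The surface integral below is analogous, with prefactor 2 pi and no R_0.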

surface_integral(v)[source]

Surface integrate v up to flux surface rho:

/rho                /rho
|    v dS  =>  2 pi |      v hcap rho' drho'
/0                  /0
Parameters

v – can be a variable string (key of variable dictionary) or an array on the rho grid

plot(plotChoice=0)[source]
plotPowerFlows()[source]
plotVolumetricHeating()[source]
get_power_flows()[source]
Returns

Dictionary having non-zero power flow terms, including the total; keys of the dictionary end in e or i to indicate electron or ion heating; units are MW

get_volumetric_heating()[source]
Returns

Dictionary having non-zero heating terms, including the total; keys of the dictionary end in e or i to indicate electron or ion heating; units are MW/m^3

plot_te_ti(styles=['-', '--'], widths=[1, 1], color='b', alpha=1)[source]
plot_chie_chii(styles=['-', '--'], widths=[1, 1], color='b', alpha=1)[source]
plot_Qe_Qi(styles=['-', '--'], widths=[1, 1], color='b', alpha=1)[source]
plot_current(styles=['-', '--', '-.', ':', '-'], widths=[1, 1, 1, 1, 1], color='b', currents=['curden', 'curboot', 'curohm', 'curbeam', 'currf'], alpha=1)[source]
plot_qvalue(styles=['-', '--'], widths=[1, 1], color='b', alpha=1)[source]
plotTransp(color=None, alpha=1.0)[source]
plotSummary(color=None, alpha=1.0)[source]
plotEquilibrium(**kw)[source]
get_psin()[source]

Return the psi_n grid

to_omas(ods=None, time_index=0, update=['summary', 'core_profiles', 'equilibrium', 'core_sources'], clear_sources=True)[source]

Translate ONETWO statefile to OMAS data structure

Parameters
  • ods – input ods to which data is added

  • time_index – time index to which data is added

  • update – list of IDS to update from statefile

Returns

ODS

class omfit_classes.omfit_onetwo.OMFIT_dump_psi(*args, **kwargs)[source]

Bases: omfit_classes.omfit_ascii.OMFITascii, omfit_classes.sortedDict.SortedDict

A class for loading the dump_psi.dat file produced by ONETWO when it fails in the contouring

Parameters
  • filename – (str) The filename of the dump_psi.dat file (default: ‘dump_psi.dat’)

  • plot_on_load – (bool) If true, plot the psi filled contour map and the specific problem value of psi

load()[source]
plot(ax=None)[source]

Plot the psi mapping as filled contours, with the problem surface as a contour

Returns

None

class omfit_classes.omfit_onetwo.OMFITiterdbProfiles(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.omfit_ascii.OMFITascii

FREYA profiles data files

load()[source]
save()[source]

The save method is supposed to be overridden by classes which use OMFITobject as a superclass. If left as it is, this method can detect if .filename was changed and, if so, makes a copy from the original .filename (saved in the .link attribute) to the new .filename

plot()[source]
omfit_classes.omfit_onetwo.ONETWO_beam_params_from_ods(ods, t, device, nml2=None, smooth_power=None, time_avg=70, pinj_min=1.0)[source]

Return the parts of NAMELIS2 that are needed by ONETWO or FREYA to describe the beams

Parameters
  • ods – An ODS object that contains beam information in the nbi IDS

  • t – (ms) The time at which the beam parameters should be determined

  • device – The device this is being set up for

  • nml2 – A namelist object pointing to inone[‘NAMELIS2’], which will be modified in place

  • smooth_power – A tuple of (smooth_power, smooth_power_time). This function returns beam power for a single time slice; smoothing the beams accounts for the time-integrated effect of any instantaneous changes in beam power. If smooth_power is not passed in, the beams are still smoothed. If calling this function multiple times for different times t of the same shot, it makes sense to smooth the powers outside of this function. For backward compatibility, if only smooth_power is given, then smooth_power_time is assumed to be btime (ods[‘nbi.time’]).

  • time_avg – (ms) If smooth_power is not given, then the beam power is causally smoothed using time_avg for the window_size of smooth_by_convolution. time_avg is also used to determine how far back the beams should be reported as being on

Returns

nml2
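The causal smoothing described for time_avg can be sketched as a boxcar average over the preceding window only (illustrative helper; the actual implementation uses smooth_by_convolution):

```python
import numpy as np

def causal_smooth(t, p, time_avg):
    """Causal boxcar smoothing of beam power: each output sample
    averages input samples within the preceding time_avg window."""
    t = np.asarray(t, float)
    p = np.asarray(p, float)
    out = np.empty_like(p)
    for i, ti in enumerate(t):
        mask = (t <= ti) & (t > ti - time_avg)  # only past samples
        out[i] = p[mask].mean()
    return out
```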

omfit_osborne

exception omfit_classes.omfit_osborne.NoFitException(message='', *args, **kw)[source]

Bases: omfit_classes.exceptions_omfit.OMFITexception

class omfit_classes.omfit_osborne.OMFITosborneProfile(*args, **kwargs)[source]

Bases: omfit_classes.omfit_mds.OMFITmds

Class accesses Osborne fits stored in MDS+, and provides convenient methods for accessing the data or fitting functions used in those fits, including handling the covariance of the fitting parameters

Parameters
  • server – The device (really MDS+ archive)

  • treename – The tree where the Osborne-tool profiles are stored

  • shot – The shot number

  • time – The timeid of the profile

  • runid – The runid of the profile

Note that this class assumes that the profiles are stored as [tree][‘PROFDB_PED’][‘P<time>_<runid>’]

property treename
property time
property runid
property shot
load()[source]

Load the MDS+ tree structure

get_raw_data(x_var='rhob', quant='ne')[source]

Get the raw data

Parameters
  • x_var – The mapping of the data (rhob,rhov,psi,R)

  • quant – The quantity for which to return the data of the fit

Returns

x, y tuple
  • x – mapped radius of data
  • y – the data (an array of numbers with uncertainties)

plot_raw_data(x_var='rhob', quant='ne', **kw)[source]

Plot the raw data

Parameters
  • x_var – The mapping of the data (rhob,rhov,psi,R)

  • quant – The quantity for which to plot the data of the fit

  • **kw – Keyword dictionary passed to uerrorbar

Returns

The collection instance returned by uerrorbar

calc_spline_fit(x_var='rhob', quant='ne', knots=5)[source]

Calculate a spline fit

Parameters
  • x_var – The mapping of the data (rhob,rhov,psi,R)

  • quant – The quantity for which to calculate a spline fit

  • knots – An integer (autoknotted) or a list of knot locations

get_fit(fit, x_var='rhob', quant='ne', corr=True, x_val=None)[source]

Get the fit, including uncertainties taking into account covariance of parameters

Parameters
  • fit – Which type of fit to retrieve

  • x_var – The mapping of the fit (rhob,rhov,psi,R)

  • quant – The quantity that was fit

  • corr – (bool) Use the covariance of the parameters

  • x_val – If the fit can be evaluated, if x_val is not None, evaluate the fit at these locations

Returns

x, y tuple
  • x – radius of fit as stored in MDS+
  • y – the fit (an array of numbers with uncertainties)

get_fit_deriv(nder, fit, **kw)[source]

Apply deriv(x,y) nder times

Parameters
  • nder – Number of derivatives

  • fit – Which type of fit to apply derivatives to

  • **kw – Keyword arguments passed to get_fit

Returns

x, d^{nder}y/dx^{nder} tuple
  • x – radius of fit as stored in MDS+
  • d^{nder}y/dx^{nder} – the nder’th derivative of the fit
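The repeated-derivative idea can be sketched on plain lists (illustrative only; the real method operates on fits retrieved from MDS+ via get_fit, and deriv there is the OMFIT utility):

```python
# Apply a finite-difference deriv(x, y) nder times, as a stand-in for
# get_fit_deriv. Central differences in the interior, one-sided at the ends.

def deriv(x, y):
    n = len(x)
    d = []
    for i in range(n):
        lo, hi = max(i - 1, 0), min(i + 1, n - 1)
        d.append((y[hi] - y[lo]) / (x[hi] - x[lo]))
    return d

def get_fit_deriv(nder, x, y):
    for _ in range(nder):
        y = deriv(x, y)
    return x, y

x = [0.0, 1.0, 2.0, 3.0]
y = [xi ** 2 for xi in x]        # y = x^2, so dy/dx = 2x to FD accuracy
_, dy = get_fit_deriv(1, x, y)
```

Central differences are exact for a quadratic at interior points, so dy matches 2x there; the endpoints carry one-sided-difference error.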

get_fit_params(fit, x_var='rhob', quant='ne', doc=False, covariance=True)[source]

Get the parameters and their uncertainties

Parameters
  • fit – Which type of fit to retrieve (tnh0 is the correct tanhfit for scalings)

  • x_var – The mapping of the fit (rhob,rhov,psi,R)

  • quant – The quantity that was fit

  • doc – (bool) if True print the fit documentation to understand which parameter is which

  • covariance – (bool) return the covariance matrix instead of just the errors

Returns

params, cov – tuple of parameters and their covariance matrix. Errors are the sqrt of diagonals: np.sqrt(np.diag(cov))
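For example, extracting per-parameter errors from a returned covariance matrix works like this (the 2x2 matrix below is made up; real params and cov come from the Osborne fits in MDS+):

```python
import math

# Hypothetical covariance matrix for two fit parameters: the errors are the
# square roots of the diagonal, exactly as the docstring's np.sqrt(np.diag(cov)).
cov = [[4.0, 1.2],
       [1.2, 9.0]]
errors = [math.sqrt(cov[i][i]) for i in range(len(cov))]
```

The off-diagonal terms matter when propagating uncertainty through the fit function (the corr=True path of get_fit), but the quoted one-sigma errors use only the diagonal.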

plot_fit(fit, x_var='rhob', quant='ne', corr=True, nder=0, **kw)[source]

Plot the fit for the given quant, with the given x-coordinate.

Parameters
  • fit – Which type of fit to plot

  • x_var – The mapping of the fit (rhob,rhov,psi,R)

  • quant – The quantity that was fit

  • corr – Plot with correlated uncertainty parameters

  • nder – The number of derivatives to compute before plotting

Returns

The result of uband

plot_all_fits(x_var='rhob', quant='ne', corr=True, nder=0)[source]

Plot all fits for the given quantity quant and mapping x_var

Parameters
  • x_var – The mapping of the fit (rhob,rhov,psi,R)

  • quant – The quantity that was fit

  • corr – Plot with correlated uncertainty parameters

  • nder – Plot the nder’th derivative of the fit (if nder=0, plot data also)

Returns

None

class omfit_classes.omfit_osborne.OMFITpFile(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.omfit_ascii.OMFITascii

OMFIT class used to interface with Osborne pfiles

Parameters
  • filename – filename passed to OMFITobject class

  • **kw – keyword dictionary passed to OMFITobject class

load()[source]

Method used to load the content of the file specified in the .filename attribute

Returns

None

save()[source]

Method used to save the content of the object to the file specified in the .filename attribute

Returns

None

plot()[source]

Method used to plot all profiles, each in a different subplot

Returns

None

remap(points='ne', **kw)[source]

Remap the disparate psinorm grids for each variable onto the same psinorm grid

Parameters
  • points – number of points for the remap of the original 256-point grid: if points is an int, make an evenly spaced array; if points is an array, use it as the grid; if points is a string, use the ‘psinorm’ of that item in the pfile (by default ne[‘psinorm’])

  • **kw – additional keywords are passed to scipy.interpolate.interp1d

Returns

The entire object remapped to the same psinorm grid
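A minimal sketch of the remapping idea, assuming two profiles stored on different psinorm grids (the real class uses scipy.interpolate.interp1d; the plain-Python interpolator and sample profiles here are illustrative):

```python
# Remap quantities stored on different psinorm grids onto one common grid
# via linear interpolation, as OMFITpFile.remap does with points='ne'.

def interp(xnew, x, y):
    out = []
    for xv in xnew:
        if xv <= x[0]:
            out.append(y[0])
        elif xv >= x[-1]:
            out.append(y[-1])
        else:
            for i in range(len(x) - 1):
                if x[i] <= xv <= x[i + 1]:
                    f = (xv - x[i]) / (x[i + 1] - x[i])
                    out.append(y[i] + f * (y[i + 1] - y[i]))
                    break
    return out

# two profiles on different grids (made-up values)
ne = {'psinorm': [0.0, 0.5, 1.0], 'data': [4.0, 3.0, 0.5]}
te = {'psinorm': [0.0, 0.25, 0.75, 1.0], 'data': [3.0, 2.5, 1.0, 0.1]}

grid = ne['psinorm']                       # points='ne' -> use ne's grid
te_remapped = interp(grid, te['psinorm'], te['data'])
```

After remapping, every quantity shares one psinorm axis, which is what makes element-wise operations between profiles well defined.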

to_omas(ods=None, time_index=0, gEQDSK=None)[source]

translate OMFITpFile class to OMAS data structure

Parameters
  • ods – input ods to which data is added

  • time_index – time index to which data is added

  • gEQDSK – corresponding gEQDSK (if ods does not already have equilibrium data)

Returns

ods

from_omas(ods=None, time_index=0)[source]

translate OMAS data structure to OMFITpFile

Parameters
  • ods – input ods to take data from

  • time_index – time index to which data is added

Returns

ods

omfit_patch

Contains classes and utility/support functions for parsing DIII-D patch panel files

class omfit_classes.omfit_patch.OMFITpatch(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.omfit_ascii.OMFITascii

Parses DIII-D PCS patch panel files.

Several types of files are recognized:
  • Type-F: F-coil patch panel in ASCII archival format.
  • Type-P: F-coil patch panel in binary format. Can be converted to type-F by changing the .patch_type attribute and executing the .add_c_coil() method. May be obsolete.
  • Type-I: I&C coil patch panel in ASCII format.

Parameters
  • filename – string [optional if shot is provided] Filename of original source file, including path. This will be preserved as self.source_file, even if the class updates to a temporary file copy. If shot is provided, filename controls output only; no source file is read. In this case, filename need not include path.

  • shot – int [optional if filename is provided] Shot number to use to look up patch data. Must provide patch_type when using shot. If a filename is provided as well, shot will be used for lookup and filename will control output only.

  • patch_type – None or string None lets the class auto-assign, which should be fine. You can force it explicitly if you really want to.

  • debug_topic – string Topic keyword to pass to printd. Allows all printd from the class to be consistent.

  • auto_clean – bool Run cleanup method after parsing. This will remove some problems, but prevent exact recovery of problematic original contents.

  • fpconvert – bool Automatically convert type P files into type F so they can be saved.

  • server – string [optional if running in OMFIT framework] Complete access instruction for server that runs viwhed, like “eldond@iris.gat.com:22”

  • tunnel – string [optional if running in OMFIT framework] Complete access instruction for tunnel used to reach server, like “eldond@cybele.gat.com:2039”. Use empty string ‘’ if no tunnel is needed.

  • work_dir – string [optional if running in OMFIT framework] Local working directory (temporary/scratch) for temporary files related to remote executable calls

  • remote_dir – string [optional if running in OMFIT framework] Remote working directory (temporary/scratch) for temporary files related to remote executable calls

  • default_patch_type – string Patch panel type to assign if auto detection fails, such as if you’re initializing a blank patch panel file. If you are reading a valid patch panel file, auto detection will probably work because it is very good. Please choose from ‘F’ or ‘I’. ‘P’ is a valid patch_type, but it’s read-only so it’s a very bad choice for initializing a blank file to fill in yourself.

  • kw – Other keywords passed to OMFITascii

bin_mark = '\\x00'
check()[source]

Checks for problems and returns a status string that is either ‘OKAY’ or a list of problems.

printq(*args)[source]

Print wrapper for keeping topic consistent within the instance

get_patch_name()[source]

Tries to determine patch panel name

add_c_coil(d_supply=False)[source]

Adds an entry for C-coils to type-F patch panels, which is useful if converted from type-P. Type-P files don’t have a C-coil entry.

Parameters

d_supply – bool. True: add the default D supply on C-coil setup. False: add the default C-coil setup with no F-coil supplies on the C-coils (the C-coil entry in F is blank)

cleanup()[source]

Cleans up junk data and blatant inconsistencies. May prevent output from matching buggy input.

save(no_write=False)[source]

Saves file to disk

class omfit_classes.omfit_patch.OMFITpatchObject(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.startup_framework.OMFITobject

Handles objects within a patch panel (such as a single F coil) and keeps indices & names consistent with each other

rail_map = ['E supply', 'VFI', '?', '']
power_supply_map = {-3: '', 0: '', 1: 'D', 2: 'V', 3: 'T1', 4: 'T2', 5: 'HV1', 6: 'HV2', 8: 'D2'}
chopper_map = ['?', 'X1', 'X2', 'X3', 'X4', 'X5', 'X6', 'X7', 'X8', 'X9', 'X10', 'X11', 'X12', 'X13', 'X14', 'X15', 'X16', 'X17', 'X18', 'X19', 'X20', 'HX1', 'HX2', 'HX3', 'HX4', 'HX5', 'HX6', 'HX7', 'HX8', 'HX9', 'HX10', 'HX11', 'HX12', 'HX13', 'HX14', 'HX15', 'HX16', 'RV1', 'RV2', '?39', '?40', '?41', 'FSUP']
direction_map = ['', 'PUSH', 'PULL']
allowed_choppers = {'1A': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 37, 38], '1B': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 37, 38], '2A': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 37, 38], '2B': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 37, 38], '3A': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 37, 38], '3B': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 37, 38], '4A': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 37, 38], '4B': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 37, 38], '5A': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 37, 38], '5B': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 37, 38], '6A': [0, 5, 6, 7, 8, 9, 10, 11, 12, 21, 22, 23, 24, 25, 26, 27, 28, 31, 32, 33, 34, 35, 36], '6B': [0, 5, 6, 7, 8, 9, 10, 11, 12, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 33, 34, 35, 36], '7A': [0, 5, 6, 7, 8, 9, 10, 11, 12, 21, 22, 23, 24, 25, 26, 27, 28, 31, 32, 33, 34, 35, 36], '7B': [0, 5, 6, 7, 8, 9, 10, 11, 12, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 33, 34, 35, 36], '8A': [0, 5, 6, 7, 8, 9, 10, 11, 12, 19, 20, 21, 22, 23, 24, 25, 26, 31, 32, 33, 34, 37, 38], '8B': [0, 5, 6, 7, 8, 9, 10, 11, 12, 19, 20, 21, 22, 23, 24, 25, 26, 33, 34, 35, 36, 37, 38], '9A': [0, 5, 6, 7, 8, 9, 10, 11, 12, 19, 20, 21, 22, 23, 24, 25, 26, 31, 32, 33, 34, 37, 38], '9B': [0, 5, 6, 7, 8, 9, 10, 11, 12, 19, 20, 21, 22, 23, 24, 25, 26, 33, 34, 35, 36, 37, 38], 'D': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 42], 'D2': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29], 'HV1': [21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36], 'HV2': [21, 22, 23, 
24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36], 'T1': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 'T2': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27], 'V': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29], 'cl': [0, 42]}
cmc = {'1A': 3, '1B': 3, '2A': 3, '2B': 3, '3A': 3, '3B': 3, '4A': 3, '4B': 3, '5A': 3, '5B': 3, '6A': 6, '6B': 6, '7A': 6, '7B': 6, '8A': 3, '8B': 3, '9A': 4, '9B': 4, 'cl': 1}
chopper_max_counts = {'1A': 3, '1B': 3, '2A': 3, '2B': 3, '3A': 3, '3B': 3, '4A': 3, '4B': 3, '5A': 3, '5B': 3, '6A': 6, '6B': 6, '7A': 6, '7B': 6, '8A': 3, '8B': 3, '9A': 4, '9B': 4, 'cl': 1}
lonely_choppers = [37, 38, 42]
property locked
class omfit_classes.omfit_patch.OMFITpatchList(*args, **kw)[source]

Bases: list

Handles list-like data within patch panels.

__setitem__ notifies the parent object of new assignments so that index/name value pairs can be kept self-consistent. Without this intervention, a list element could be changed without changing the list object itself, so the parent OMFITpatchObject instance would get no notification and the lists of chopper indices and chopper names could get out of sync.

property parent
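The notify-parent pattern that OMFITpatchList implements can be sketched as follows (class names, the chopper table, and the notify signature here are invented stand-ins, not the OMFIT implementation):

```python
# A list whose __setitem__ tells its parent, so a paired name list stays in
# sync with the index list, as described for OMFITpatchList above.

class NotifyingList(list):
    def __init__(self, iterable=(), parent=None, key=None):
        super().__init__(iterable)
        self.parent, self.key = parent, key

    def __setitem__(self, index, value):
        super().__setitem__(index, value)
        if self.parent is not None:
            self.parent.notify(self.key, index, value)

class Patch:
    chopper_names = ['?', 'X1', 'X2']      # toy index -> name table

    def __init__(self):
        self.choppers = NotifyingList([0, 0], parent=self, key='choppers')
        self.names = ['?', '?']

    def notify(self, key, index, value):
        if key == 'choppers':              # keep the name list consistent
            self.names[index] = self.chopper_names[value]

p = Patch()
p.choppers[0] = 2                          # also updates p.names[0]
```

A plain list assignment would mutate the element silently; subclassing list and hooking __setitem__ is what gives the parent a chance to react.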

omfit_path

class omfit_classes.omfit_path.OMFITpath(filename, **kw)[source]

Bases: omfit_classes.startup_framework.OMFITobject

OMFIT class used to interface with files

Parameters
  • filename – filename passed to OMFITobject class

  • **kw – keyword dictionary passed to OMFITobject class

omfit_pdb

class omfit_classes.omfit_pdb.OMFITpdb(*args, **kwargs)[source]

Bases: omfit_classes.omfit_error.OMFITobjectError

omfit_profiles

class omfit_classes.omfit_profiles.OMFITprofiles(filename, data_vars=None, coords=None, attrs=None, comment='')[source]

Bases: omfit_classes.omfit_data.OMFITncDataset

Data class used by OMFITprofiles, CAKE and other OMFIT modules for storing experimental profiles data

Parameters
  • filename – filename of the NetCDF file where data will be saved

  • data_vars – see xarray.Dataset

  • coords – see xarray.Dataset

  • attrs – see xarray.Dataset

  • comment – String that if set will show in the OMFIT tree GUI

property comment
to_omas(ods=None, times=None)[source]
Parameters

ods – ODS to which data will be appended

Returns

ods

model_tree_quantities(warn=True, no_update=False, details=False)[source]

Returns list of MDS+ model_tree_quantities for all species.

Parameters
  • warn – [bool] If True, the function will warn if some of the model_tree_quantities are missing in OMFIT-source/omfit/omfit_classes/omfit_profiles.py and the model tree should be updated

  • no_update – [bool] If True, the function will return only items that are both in the object AND on the model tree, ignoring items that are not in model_tree_quantities.

Returns

list of strings

create_model_tree(server, treename='OMFIT_PROFS')[source]

Generate MDS+ model tree

Parameters
  • server – MDS+ server

  • treename – MDS+ treename

check_attrs(quiet=False)[source]

Checks that basic/standard attrs are present. If not, they will be filled with standby values (usually ‘unknown’). Also checks that ints are ints and not int64, which would prevent json from working properly.

Parameters

quiet – If set to True, the function will not print warnings. By default set to False.

to_mds(server, shot, times=None, treename='OMFIT_PROFS', skip_vars=[], comment=None, tag=None, relaxed=False, commit=True, iri_upload_metadata=None)[source]

Writes the OMFITprofiles dataset to DIII-D MDS+ and updates d3drdb accordingly

Parameters
  • server – MDS+ server

  • shot – shot to store the data to

  • treename – MDS+ treename

  • skip_vars – variables to skip uploading. Array-like

  • relaxed – if set to True, the function will only try to upload vars in the model_tree_quantities list as recorded at the beginning of this file. If False, it will attempt to upload all variables stored in self, and fail if a profile variable cannot be uploaded (usually because there is no corresponding node on the MDS+ tree).

  • commit – (bool) If set to False, the SQL query will not commit the data to the coderunrdb. This must be False for a Jenkins test; otherwise, trying to write the data to the SQL database twice will throw an error.

  • iri_upload_metadata – optionally, a dictionary with metadata for upload to iri_upload_log table in the code run RDB. Certain metadata are determined dynamically. If None, then it will not be logged to iri_upload_metadata.

Returns

runid, treename

mds_translator(inv=None)[source]

Converts OMFITprofiles dataset keys to MDS+ node names of fewer than 12 characters

Parameters
  • inv – string to which to apply the transformation; if None, the transformation is applied to all of OMFITprofiles.model_tree_quantities as a sanity check

  • reverse – reverse the translation. Used to translate MDS+ node names back to OMFITprofiles names

Returns

transformed string; or, if inv is None, the mapped_model_2_mds and mapped_mds_2_model dictionaries
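The general idea of such a translator can be sketched as follows (the abbreviation table below is invented; the real mapping lives in omfit_profiles and must stay one-to-one so names can be translated back):

```python
# Map long dataset keys to MDS+ node names of at most 12 characters, with a
# uniqueness check. Abbreviations are illustrative, not the OMFIT tables.

ABBREVIATIONS = {'omega_plasma_': 'W_PL_', 'angular_momentum_': 'L_'}

def to_mds_node(key, max_len=12):
    node = key.upper()
    for long_form, short in ABBREVIATIONS.items():
        node = node.replace(long_form.upper(), short)
    if len(node) > max_len:
        raise ValueError(f'{key!r} -> {node!r} exceeds {max_len} chars')
    return node

keys = ['n_e', 'T_e', 'omega_plasma_e']
nodes = [to_mds_node(k) for k in keys]
assert len(set(nodes)) == len(nodes)   # mapping must stay one-to-one
```

Running the translator over every known quantity (as the inv=None path does) is exactly this uniqueness sanity check scaled up.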

from_mds(server, runid)[source]
to_pFiles(eq, times=None, shot=None)[source]
Parameters
  • eq – ODS() or dict. (OMFITtree() is a dict) Needs to contain equilibria information, either in the form of the ODS with needed eq already loaded, or as OMFITgeqdsk() objects in the Dict with the time[ms] as keys. Times for the eq need to be strict matches to profiles times coord.

  • times – array like. time for which you would like p files to be generated.

  • shot – int. shot number, only relevant in generating p file names.

Returns

OMFITtree() containing a series of OMFITpfile objs.

get_xkey()[source]

Get the key of the x-coord associated with this data array.

Returns

str. key of the x-coord associated with the profiles, like ‘psi_n’, ‘psi’ or ‘rho’.

Zeff(gas='2H1', ni_to_ne=1, xfit='rho', device='DIII-D', update=True, verbose=False)[source]

Effective charge of plasma.

Formula: Z_{eff} = sum{n_s Z_s^2} / sum{n_s Z_s}

Returns

DataArray.
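On plain numbers the formula reads as follows (the densities below are invented; the method itself evaluates this on fitted profile DataArrays):

```python
# Z_eff = sum(n_s Z_s^2) / sum(n_s Z_s) for a toy deuterium + carbon plasma.
ions = {'2H1': {'n': 4.0e19, 'Z': 1}, '12C6': {'n': 2.0e18, 'Z': 6}}

num = sum(s['n'] * s['Z'] ** 2 for s in ions.values())
den = sum(s['n'] * s['Z'] for s in ions.values())   # = n_e by quasi-neutrality
zeff = num / den
```

A 5% carbon fraction already pulls Z_eff well above 1, which is why Z_eff is such a sensitive impurity-content diagnostic.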

pressure(species, xfit='rho', name=None, update=True, debug=False)[source]

Species pressure.

Formula: P = sum{n_s T_s}

Parameters
  • species – list. Species included in calculation sum.

  • name – str. subscript for the name of the result DataArray (i.e. it will be called p_name)

Returns

DataArray. Pressure (Pa)

inverse_scale_length(key, update=True)[source]

Inverse scale length

Parameters

key – str. The variable for which the inverse scale length is computed

Returns

Dataset
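The quantity being computed is 1/L_y = -(1/y) dy/dx, which a toy profile makes concrete (illustrative only; the class computes this on fitted Datasets, and the grid and profile below are made up):

```python
import math

# On y ~ exp(-5x) the inverse scale length is exactly 5, so a finite-difference
# version of -(1/y) dy/dx should recover ~5 at interior points.

x = [i * 0.01 for i in range(101)]
y = [math.exp(-5.0 * xi) for xi in x]

def inverse_scale_length(x, y):
    out = []
    for i in range(len(x)):
        lo, hi = max(i - 1, 0), min(i + 1, len(x) - 1)
        dydx = (y[hi] - y[lo]) / (x[hi] - x[lo])
        out.append(-dydx / y[i])
    return out

isl = inverse_scale_length(x, y)
```

The endpoints use one-sided differences and so carry slightly more error than the interior.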

log_lambda(update=True)[source]

The Coulomb logarithm: the ratio of the maximum impact parameter to the classical distance of closest approach in Coulomb scattering. Lambda, the argument, is known as the plasma parameter.

Formula: ln Lambda = 17.3 - (1/2) ln(n_e/10^20) + (3/2) ln(T_e/eV)

Returns
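A direct transcription of the formula above (the exact temperature normalization used by the class follows the OMFIT source, so treat the units here as illustrative):

```python
import math

def log_lambda(n_e, t_e):
    """ln Lambda = 17.3 - (1/2) ln(n_e / 1e20) + (3/2) ln(t_e), with t_e
    already normalized to the class's temperature unit (an assumption here)."""
    return 17.3 - 0.5 * math.log(n_e / 1e20) + 1.5 * math.log(t_e)
```

At the normalization point n_e = 1e20 and t_e = 1 both logarithms vanish and the function returns the 17.3 constant, which is a handy sanity check on any reimplementation.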

collisionality(s, s2='e', eq=None, update=True)[source]

Collisionality from J.D. Huba, “NRL FORMULARY”, 2011.

Parameters
  • s – string. Species.
  • s2 – string. Colliding species (default electrons). Currently not used

Returns

Dataset

spitzer(update=True)[source]
plasma_frequency(s, relativistic=False, update=True, debug=False)[source]

Calculate plasma frequency.

Parameters
  • s – string. Species.
  • relativistic – bool. Make a relativistic correction for m_e (not actually the relativistic mass).

Returns

Dataset

gyrofrequency(s, mag_field=None, relativistic=False, update=True)[source]

Calculate gyrofrequency at the LFS midplane.

Parameters
  • s – string. Species.
  • mag_field – external structure generated from OMFITlib_general.mag_field_components
  • relativistic – bool. Make a relativistic correction for m_e (not actually the relativistic mass).

Returns

Dataset

xo_cutoffs(s, mag_field=None, relativistic=True, update=True)[source]

Calculate X-mode R and L cutoffs. Note that the O-mode cutoff is already stored as omega_plasma_e.

Parameters
  • s – string. Species.
  • relativistic – bool. Make a relativistic correction for m_e (not actually the relativistic mass).

Returns

Dataset

diamagnetic_frequencies(spc, update=True)[source]

Calculate the diamagnetic frequency, and its density / temperature components.

Formula: omega_P = - (T/(nZe)) (dn/dpsi) - (1/(Ze)) (dT/dpsi)

Parameters
  • spc – Species for which temperature is fit
  • update – bool. Set to True to update self; if False, only returns and does not update self. Gradients, if missing, will always update though.

offset_frequency(s, ms=None, propagate_uncertainties=False, update=True)[source]

Calculate the NTV offset frequency.

Formula: omega_NTV0 = - (3.5 +/- 1.5) (1/(Ze)) (dT/dpsi)

Parameters
  • s – Species for which temperature is fit
  • ms – Species to use for m, Z (defaults to the fit species)

radial_field(s, Er_vpol_zero=False, mag_field=None, eq=None, plot_debug_Er_plot=False, xfit='rho', update=True)[source]

Radial electric field.

Parameters
  • s – Species which will be used to calculate Er
  • mag_field – external structure generated from OMFITlib_general.mag_field_components

omega_perp(s, eq=None, update=True)[source]

Perpendicular rotation frequency.

Formula: omega_{perp} = omega_{E} + sigma*omega_P

Parameters

s – str. Species.

Returns

Dataset.

angular_momentum(s, gas='2H1', xfit='rho', update=True)[source]

Angular momentum.

Formula: L_{phi} = m * n * omega_{tor} * <R^2>

Parameters

s – str. Species.

Returns

Dataset.

xderiv(key, coord='psi', update=True)[source]

Returns the derivative of the value corresponding to key on the spatial coordinate coord.

Parameters
  • key – str. The variable

  • coord – str. The radial coordinate with respect to which the derivative is taken

  • update – bool. Set to true to update self, if False, only returns and does not update self.

Returns

Dataset

find_pedestal(data_array, der=True, update=True)[source]

Find the pedestal extent and values.

Parameters
  • data_array – DataArray. 2D fit/self profiles.

  • der – bool. Find the extent of the high-derivative region (default is to find the edge “bump” feature).

Returns

Dataset. Edge feature inner/center/outer radial coordinates and corresponding values.

pedestal_mwidth()[source]

Calculate the width of the pedestal in meters at the magnetic axis “midplane”.

Parameters

key – str. Quantity that the pedestal extent has been calculated for.

Returns

Dataset. Time evolution of pedestal width.

calc_intrinsic_rotation(eq=None, update=True)[source]

Evaluation of the omega_intrinsic function

Returns

omega_intrinsic

P_rad_int(update=True)[source]

Volume integral of the total radiated power

Formula: P_{rad,int} = int{P_{rad}}

Returns

KDG_neoclassical(mag_field=None, xfit='rho', update=True)[source]

Poloidal velocity from Kim, Diamond, Groebner, Phys. Fluids B (1991), with a poloidal in-out correction based on Ashourvan (2017)

Returns

Places poloidal velocity for the main ion and impurity on the outboard midplane into DERIVED. Places a neoclassical approximation for main-ion toroidal flow, based on the measured impurity flow, into self.

get_nclass_conductivity_and_bootstrap(gas='2H1', xfit='rho', device=None, eq=None, debug=False, update=True)[source]

Call the neoclassical conductivity and bootstrap calculations from utils_fusion

Returns

Dataset containing conductivity and bootstrap DataArrays

check_keys(keys=[], name='', print_error=True)[source]

Check to make sure required data is available

reset_coords(names=None, drop=False)[source]

Pass-through implementation of Dataset.reset_coords(). Given names of coordinates, convert them to variables. Unlike Dataset.reset_coords(), however, this function modifies in place!

Parameters
  • names – Names of coords to reset. Cannot be index coords. Defaults to all non-index coords.
  • drop – If True, drop coords instead of converting. Default False.

combine(other, combine_attrs='drop_conflicts')[source]

Pass-through implementation of xarray.combine_by_coords. Given another OMFITprofiles instance, it seeks to combine the non-conflicting data in the two objects.

Parameters
  • other – Another instance of OMFITprofiles.
  • combine_attrs – Keyword-controlled behavior regarding conflicting attrs (as opposed to vars). Defaults to ‘drop_conflicts’, where conflicting attrs are dropped from the result (see xarray.combine_by_coords).

class omfit_classes.omfit_profiles.OMFITprofilesDynamic(filename, fits=None, equilibrium=None, main_ion='2H1', **kw)[source]

Bases: omfit_classes.omfit_data.OMFITncDynamicDataset

Class for dynamic calculation of derived quantities

Examples

Initialize the class with a filename and FIT Dataset:

>> tmp=OMFITprofilesDynamic(‘test.nc’, fits=root[‘OUTPUTS’][‘FIT’], equilibrium=root[‘OUTPUTS’][‘SLICE’][‘EQ’], main_ion=root[‘SETTINGS’][‘EXPERIMENT’][‘gas’])

Accessing a quantity will dynamically calculate it:

>> print(tmp[‘Zeff’])

Quantities are then stored (they are not calculated twice):

>> tmp=OMFITprofilesDynamic(‘test.nc’, fits=root[‘OUTPUTS’][‘FIT’], equilibrium=root[‘OUTPUTS’][‘SLICE’][‘EQ’], main_ion=’2H1’)

>> uband(tmp[‘rho’], tmp[‘n_2H1’])

Parameters
  • filename – Path to file

  • lock – Prevent in memory changes to the DataArray entries contained

  • exportDataset_kw – dictionary passed to exportDataset on save

  • data_vars – see xarray.Dataset

  • coords – see xarray.Dataset

  • attrs – see xarray.Dataset

  • **kw – arguments passed to OMFITobject

calc_n_main_ion()[source]

Density of the main ion species. Assumes quasi-neutrality.

Returns

None. Updates the instance’s Dataset in place.

calc_T_main_ion()[source]

Temperature of the main ion species. Assumes it is equal to the measured ion species temperature. If there are multiple impurity temperatures measured, it uses the first one.

Returns

None. Updates the instance’s Dataset in place.

calc_Zeff()[source]

Effective charge of plasma.

Formula: Z_{eff} = sum{n_s Z_s^2} / sum{n_s Z_s}

Returns

None. Updates the instance’s Dataset in place.

calc_Total_Zeff()[source]

Effective charge of plasma.

Formula: Z_{eff} = sum{n_s Z_s^2} / sum{n_s Z_s}

Returns

None. Updates the instance’s Dataset in place.

omfit_classes.omfit_profiles.available_profiles(server, shot, device='DIII-D', verbose=True)[source]

omfit_python

class omfit_classes.omfit_python.OMFITpythonTask(filename, **kw)[source]

Bases: omfit_classes.omfit_python._OMFITpython

Python script for OMFIT tasks

run(**kw)[source]
runNoGUI(**kw)[source]

This method allows execution of the script without invoking TkInter commands. Note that the TkInter commands will also be discarded for the OMFITpython scripts that this method calls

estimate_nprocs_from_available_memory()[source]

This method estimates how many prun processes will fit into the memory of the current system. It returns one core less than possible as a safety margin, since processes that do not have enough memory will completely freeze the session.

prun_auto_proc(nsteps, resultNames, **kw)[source]
prun(nsteps, nprocs, resultNames, noGUI=False, prerun='', postrun='', result_type=None, runIDs=None, runlabels=None, no_mpl_pledge=False, **kw)[source]

Parallel execution of OMFITpythonTasks.

>> a=OMFIT[‘test’].prun(10,5,’a’,prerun=”OMFIT[‘c’]=a”,a=np.arange(5)*2+0.5)

Parameters
  • nsteps – number of calls

  • nprocs – number of simultaneous processes; if None, check SLURM_TASKS_PER_NODE, then OMP_NUM_THREADS, and finally use 4. The actual number of processes is always checked against self.estimate_nprocs_from_available_memory and the smaller value is used

  • resultNames – name, or list of names with the variables that will be returned at the end of the execution of each script

  • noGUI – Disable GUI with gray/blue/green boxes showing progress of parallel run

  • prerun – string that is executed before each parallel execution (useful to set entries in the OMFIT tree)

  • postrun – string that is executed after each parallel execution (useful to gather entries from the OMFIT tree)

  • result_type – class of the object that will contain the prun results (e.g. OMFITtree, OMFITcollection, OMFITmcTree)

  • runIDs – list of strings used to name and retrieve the runs (use numbers by default)

  • runlabels – list of strings used to display the runs (same as runIDs by default)

  • no_mpl_pledge – User pledges not to call any matplotlib plotting commands as part of their scripts. The prun will thus not need to switch the matplotlib backend, which would close any open figures.

  • **kw – additional keywords will appear as local variables in the user script Local variables that are meant to change between different calls to the script should be passed as lists of length nsteps

Returns

Dictionary containing the results from each script execution
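The calling convention (per-step keyword values passed as length-nsteps lists, results gathered into a dict keyed by run ID) can be sketched with a plain function in place of an OMFIT script (everything below is an illustrative stand-in, not the prun implementation):

```python
# Stand-in for prun's semantics: fan per-step kwargs out to workers and
# collect the named result for each step.
from concurrent.futures import ThreadPoolExecutor

def script(a):
    return a ** 2              # stand-in for one OMFITpythonTask execution

def prun_sketch(nsteps, nprocs, func, **kw):
    # each keyword is a length-nsteps list; step i gets the i-th element
    per_step = [{k: v[i] for k, v in kw.items()} for i in range(nsteps)]
    with ThreadPoolExecutor(max_workers=nprocs) as ex:
        results = list(ex.map(lambda d: func(**d), per_step))
    return {i: r for i, r in enumerate(results)}

out = prun_sketch(4, 2, script, a=[0, 1, 2, 3])
```

As in the real prun, keywords that should vary between calls are lists of length nsteps, and the return value maps run IDs to each step's result.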

opt(actuators, targets, reset=None, prerun='', postrun='', method='hybr', tol=None, options={'eps': 0.01, 'xtol': 0.01}, postfail='', reset_on_fail=True, **kw)[source]
Execute OMFITpythonTask inside scipy.optimize.root

Optimizes actuators to achieve targets. Any tree location in the reset list is reset on each call to self. The tree will be in the state after the final run of self, whether or not the optimizer converges. If there is an exception, the tree is reset using the reset list. See regression/test_optrun.py for example usage.

Parameters
  • actuators – dictionary of actuator dictionaries with the following keys ‘set’: function or string tree location to set as actuator ‘init’: initial value for actuator

  • targets – dictionary of target dictionaries with the following keys ‘get’: function or string tree location to get current value ‘target’: value to target ‘tol’: (optional) absolute tolerance of target value

  • reset – list of tree locations to be reset on each iteration of optimization

  • prerun – string that is executed before each execution of self *useful for setting entries in the OMFIT tree

  • postrun – string that is executed after each execution of self *useful for gathering entries from the OMFIT tree

  • method – keyword passed to scipy.optimize.root

  • tol – keyword passed to scipy.optimize.root

  • options – keyword passed to scipy.optimize.root

  • postfail – string that is executed if execution of self throws an error *useful for gathering entries from the OMFIT tree

  • reset_on_fail – reset the tree if an exception or keyboard interrupt occurs

  • **kw – additional keywords passed to self.run()

Returns

OptimizeResult output of scipy.optimize.root, Convergence history of actuators, targets, and errors

importCode(**kw)[source]

Executes the code and returns it as newly generated module

>> myLib=OMFIT[‘test’].importCode()
>> print(myLib.a)

class omfit_classes.omfit_python.OMFITpythonGUI(filename, **kw)[source]

Bases: omfit_classes.omfit_python._OMFITpython

Python script for OMFIT gui

run(_relLoc=None, **kw)[source]
class omfit_classes.omfit_python.OMFITpythonPlot(filename, **kw)[source]

Bases: omfit_classes.omfit_python._OMFITpython

Python script for OMFIT plots. Unlike the OMFITpythonTask class, OMFITpythonPlot will not refresh the OMFIT GUIs, though the OMFIT tree GUI itself will still be updated.

Use .plot() method for overplotting (called by pressing <Shift-Return> in the OMFIT GUI)

Use .plotFigure() method for plotting in new figure (called by pressing <Return> in the OMFIT GUI)

When a single script should open more than one figure, it’s probably best to use objects of the OMFITpythonTask class and handle overplotting and the opening of new figures manually. To use an OMFITpythonTask object for plotting, it’s useful to call the .runNoGUI method, which prevents updates of the open GUIs.

run(**kw)[source]
runNoGUI(**kw)[source]
plot(**kw)[source]

Execute the script and open a new figure only if no figure was already open. Effectively, this will result in an overplot. This method is called by pressing <Shift-Return> in the OMFIT GUI.

Parameters

**kw – keywords passed to the script

plotFigure(*args, **kw)[source]

Execute the script and open a new figure. Effectively, this will result in a new figure. This method is called by pressing <Return> in the OMFIT GUI.

Parameters
  • *args – arguments passed to the figure() command

  • **kw – keywords passed to the script

class omfit_classes.omfit_python.OMFITpythonTest(filename, **kw)[source]

Bases: omfit_classes.omfit_python.OMFITpythonTask

Python script for OMFIT regression tests

tests_list()[source]
Returns

list of available tests

class omfit_classes.omfit_python.parallel_environment(mpl_backend=None)[source]

Bases: object

This environment is used as part of OMFITpythonTask.prun to make it safe for multiprocessing

omfit_classes.omfit_python.execGlobLoc(*args, **kw)[source]
omfit_classes.omfit_python.defaultVars(**kw)[source]

Function used to set up default variables in an OMFIT script (of type OMFITpythonTask, OMFITpythonTest, OMFITpythonGUI, or OMFITpythonPlot). This is the mechanism that allows OMFIT scripts to act as functions.

Parameters

**kw – keyword parameter dictionary with default value

Returns

dictionary with variables passed by the user

To be used as dfv = defaultVars(var1_i_want_to_define=None, var2_i_want_to_define=None) and then later in the script, one can use var1_i_want_to_define or var2_i_want_to_define

Implications of python passing variables by reference are noted in https://medium.com/@meghamohan/mutable-and-immutable-side-of-python-c2145cf72747
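The merge-defaults-with-user-values pattern above can be sketched with a simplified, standalone stand-in for defaultVars (`default_vars_sketch` is a hypothetical name; the real OMFIT implementation also interacts with the script execution context):

```python
def default_vars_sketch(user_kw, **defaults):
    """Merge user-supplied keyword values over declared defaults,
    rejecting names the script did not declare."""
    unknown = set(user_kw) - set(defaults)
    if unknown:
        raise TypeError('unrecognized script arguments: %s' % sorted(unknown))
    merged = dict(defaults)
    merged.update(user_kw)
    return merged

# A script declaring two inputs, called with one override:
dfv = default_vars_sketch({'shot': 123456}, shot=0, plot_result=False)
print(dfv)  # {'shot': 123456, 'plot_result': False}
```

Note that, per the pass-by-reference caveat linked above, a mutable default (list, dict) passed this way is shared with the caller rather than copied.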

omfit_classes.omfit_python.OMFITworkDir(root=None, server='')[source]

This is a convenience function which returns the string for the working directory of OMFIT modules (remote or local). The returned directory string is compatible with parallel running of modules. The format used is: [server_OMFIT_working_directory]/[projectID]/[mainsettings_runID]/[module_tree_location]-[module_runid]/[p_multiprocessing_folder]/

Parameters
  • root – root of the module (or string)

  • server – remote server. If empty string or None, then the local working directory is returned.

Returns

directory string
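For illustration, the documented directory format can be composed as below; the argument names are placeholders for the bracketed fields, not actual OMFIT variables:

```python
def work_dir_sketch(work_root, project_id, run_id, module_location,
                    module_runid, mp_folder=None):
    """Assemble a working-directory string following the documented
    [server_root]/[projectID]/[runID]/[module_location]-[module_runid]/
    layout (POSIX separators, as used on remote servers)."""
    parts = [work_root, project_id, run_id,
             '%s-%s' % (module_location, module_runid)]
    if mp_folder:  # optional multiprocessing subfolder
        parts.append(mp_folder)
    return '/'.join(parts) + '/'

print(work_dir_sketch('/scratch/omfit', 'project42', 'run01', 'EFIT', 'r1'))
# /scratch/omfit/project42/run01/EFIT-r1/
```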

omfit_classes.omfit_python.for_all_modules(doThis='', deploy=False, skip=False, these_modules_only=None)[source]

This is a utility function (to be used by module developers) which can be used to execute the same command on all of the modules. Note that this script will overwrite the content of your OMFIT tree.

Parameters
  • doThis – python script to execute. In this script the following variables are defined: root holds a reference to the current module being processed, moduleID its module ID, rootName its location in the tree, and moduleFile its module filename.

  • deploy – save the modules back on their original location

  • skip – skip modules that are already in the tree

  • these_modules_only – list of modules ID to process (useful for development of doThis)

Returns

None

omfit_classes.omfit_python.import_all_private(module_str)[source]

This function is used to import all private attributes from a module. It can be used in a script like this:

>>> locals().update(import_all_private('omfit_classes.OMFITx'))

omfit_classes.omfit_python.import_mayavi(verbose=True)[source]

This function attempts to import mayavi, mayavi.mlab while avoiding known institutional installation pitfalls as well as known tk vs qt backend issues.

Parameters

verbose – bool. Prints a warning message if mayavi can’t be imported

Returns

obj. mayavi if it was successfully imported, None if not

class omfit_classes.omfit_python.omfit_pydocs(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict

destroy()[source]
omfit_classes.omfit_python.threaded_logic(function, result, *args, **kw)[source]

omfit_rabbit

class omfit_classes.omfit_rabbit.OMFITrabbitEq(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.omfit_ascii.OMFITascii

Equilibrium files for RABBIT

load()[source]
plot()[source]
save()[source]

The save method is meant to be overridden by classes that use OMFITobject as a superclass. If left as is, this method detects whether .filename was changed and, if so, copies the file from the original .filename (saved in the .link attribute) to the new .filename

from_geqdsk(gEQDSK)[source]
save_from_gFile(filename, gEQDSK)[source]
class omfit_classes.omfit_rabbit.OMFITrabbitBeamout(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.omfit_ascii.OMFITascii

Beam output files from RABBIT

load()[source]
plot()[source]
class omfit_classes.omfit_rabbit.OMFITrabbitTimetraces(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.omfit_ascii.OMFITascii

Timetraces input file for RABBIT

load()[source]
save()[source]

The save method is meant to be overridden by classes that use OMFITobject as a superclass. If left as is, this method detects whether .filename was changed and, if so, copies the file from the original .filename (saved in the .link attribute) to the new .filename

class omfit_classes.omfit_rabbit.OMFITrabbitBeamin(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.omfit_ascii.OMFITascii

Beam input file for RABBIT

load()[source]
save()[source]

The save method is meant to be overridden by classes that use OMFITobject as a superclass. If left as is, this method detects whether .filename was changed and, if so, copies the file from the original .filename (saved in the .link attribute) to the new .filename

omfit_rdb

class omfit_classes.omfit_rdb.OMFITrdb(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict

Class used to connect to relational databases

Parameters
  • query – string SQL SELECT query

  • db – string database to connect to

  • server – string SQL server to connect to (e.g. d3drdb as listed under OMFIT[‘MainSettings’][‘SERVER’])

  • by_column – bool

    False: return results by rows (can be slow for a large number of records). The result of .select() is a SortedDict with keys numbered 0, 1, …, each holding another SortedDict whose keys correspond to the columns.

    True: return results by columns. The result of .select() is a SortedDict with keys corresponding to columns, each holding an array with length set by the number of rows selected.

property postgresql
select(query=None, by_column=None)[source]

Pass a query to the database, presumably with a SELECT statement in it

Parameters
  • query – string A string such as “SELECT * from profile_runs where shot=126006”; if None then use the query used to instantiate the object

  • by_column – bool [optional] If True or False, override the self.by_column set at instantiation

Returns

SortedDict

by_column = False: keys are numbered 0, 1, 2, …; the value behind each key is a SortedDict whose keys are the columns selected.

by_column = True: keys are column names; values are arrays with length set by the number of rows selected.
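The two result layouts can be illustrated with plain dicts standing in for SortedDict (the shot/runid values are made-up data, not tied to any real database):

```python
# Row-oriented result (by_column=False): integer keys, one dict per row.
rows = {0: {'shot': 126006, 'runid': 'a1'},
        1: {'shot': 126007, 'runid': 'a2'}}

def rows_to_columns(rows):
    """Convert the row-oriented layout to the column-oriented one
    (by_column=True): one list of values per column."""
    columns = {}
    for row in rows.values():
        for col, val in row.items():
            columns.setdefault(col, []).append(val)
    return columns

print(rows_to_columns(rows))
# {'shot': [126006, 126007], 'runid': ['a1', 'a2']}
```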

custom_procedure(procedure, commit=True, **arguments)[source]

Pass an arbitrary custom procedure to the sql database

Parameters
  • procedure – string A string that represents the custom procedure to be called

  • commit – bool If set to False it will not commit the data to the coderunrdb. This should be done when running a Jenkins test; otherwise it may attempt to write data to the same shot/runid twice and throw an error.

Returns

Dict Output list of pyodbc rows returned by the custom query

alter_add(table, column_name, column_type, commit=True, verbose=True)[source]

Alter table in SQL database by adding a column

Parameters
  • table – string table name

  • column_name – string column name

  • column_type – string column type, has to be an SQL DataType, e.g. BOOLEAN, FLOAT, etc.

  • commit – bool commit alter command to SQL

  • verbose – bool print SQL command being used

alter_drop(table, column_name, commit=True, verbose=True)[source]

Alter table in SQL database by dropping a column

Parameters
  • table – string table name

  • column_name – string column name

  • commit – bool commit alter command to SQL

  • verbose – bool print SQL command being used

Returns

string SQL command

delete(table, where, commit=True, verbose=True)[source]

Delete row(s) in SQL database

Parameters
  • table – string table where to update

  • where – string or dict Which record or records should be deleted. NOTE that all records that satisfy this condition will be deleted! A dict will be converted into a string of the form "key1=value1 and key2=value2 …"

  • commit – bool commit delete to SQL

  • verbose – bool print SQL command being used

commit()[source]

Commit commands in SQL database

update(table, data, where, commit=True, overwrite=1, verbose=True)[source]

Update row(s) in SQL database

Parameters
  • table – string Table to update.

  • data – dict Keys are columns to update and values are values to put into those columns.

  • where – dict or string Which record or records should be updated. NOTE that all records that satisfy this condition will be updated! If it's a dictionary, the column/value conditions will be concatenated with " AND ", so that {'my_column': 5.2, 'another_col': 7} becomes "my_column=5.2 AND another_col=7". A string will be used directly.

  • commit – bool Commit update to SQL. Set to false for testing without editing the database.

  • overwrite – bool or int

    0/False: if any of the keys in data already have entries in the table, do not update anything.

    1/True: update everything without checking.

    2: if any of the keys in data already have entries in the table, don't update those, but DO write missing entries.

  • verbose – bool Print SQL command being used.

Returns

string The SQL command that would be used. If the update is aborted due to overwrite avoidance, the SQL command will be prefixed by “ABORT:”
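The dict-to-condition conversion described for the where parameter can be sketched as follows; quoting of string values is an assumption of this sketch, not necessarily what update() itself does:

```python
def where_clause(where):
    """Turn a dict into a 'key1=value1 AND key2=value2' condition;
    pass strings through unchanged."""
    if isinstance(where, str):
        return where
    parts = []
    for key, value in where.items():
        rendered = "'%s'" % value if isinstance(value, str) else str(value)
        parts.append('%s=%s' % (key, rendered))
    return ' AND '.join(parts)

print(where_clause({'my_column': 5.2, 'another_col': 7}))
# my_column=5.2 AND another_col=7
```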

insert(table, data, duplicate_key_update=False, commit=True, verbose=True)[source]

Insert row in SQL database

Parameters
  • table – string table where data will be inserted

  • data – dict, list, or tuple dict: keys are column names and values are the data to insert; list or tuple: just the values.

  • duplicate_key_update – bool append ‘ ON DUPLICATE KEY UPDATE’ to INSERT command

  • commit – bool commit insert to SQL

  • verbose – bool print SQL command being used

Returns

The SQL command that was used or would be used
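As an illustration of composing an INSERT statement from a dict of column values (a sketch using pyodbc-style ? placeholders; not the OMFITrdb implementation):

```python
def build_insert(table, data):
    """Build a parameterized INSERT statement from a dict of
    column -> value, returning the SQL and the parameter tuple."""
    columns = ', '.join(data)
    placeholders = ', '.join('?' for _ in data)
    sql = 'INSERT INTO %s (%s) VALUES (%s)' % (table, columns, placeholders)
    return sql, tuple(data.values())

sql, params = build_insert('profile_runs', {'shot': 126006, 'runid': 'x1'})
print(sql)     # INSERT INTO profile_runs (shot, runid) VALUES (?, ?)
print(params)  # (126006, 'x1')
```

Using placeholders and a parameter tuple, rather than interpolating values into the SQL string, is what lets the database driver handle quoting and escaping.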

copy_row(table, **kw)[source]

Copy one row of a table to a new row of a table

Parameters
  • table – string table in the database for which to copy a row

  • **kw – The keywords passed must be the primary keys of the table. The values of the keywords must be two-element containers: (copy_from, copy_to)

Returns

string SQL command

primary_keys(table)[source]

Return the keys that are the primary keys of the table

Parameters

table – string table for which to evaluate the primary keys

Returns

list of strings

load()[source]

Connect to the database and retrieve its content

get_databases()[source]

Return a list of databases on the server

get_tables()[source]

Return a list of tables on the given database

get_columns(table)[source]

Return a list of columns in the given table

get_db_structure()[source]

Return a nested list of tables and columns of those tables within the current database

omfit_classes.omfit_rdb.available_efits_from_rdb(scratch, device, shot, default_snap_list=None, format='{tree}', mdsplus_treename=None, **kw)[source]

Retrieves EFIT runids for a given shot from the rdb.

Parameters
  • scratch – dict dictionary where the information is cached (to avoid querying SQL database multiple times)

  • device – string device for which to query list of available EFITs

  • shot – int shot for which to query list of available EFITs

  • default_snap_list – dict dictionary to which list of available efits will be passed

  • format – string format in which to write list of available efits (tree, by, com, drun, runid) are possible options

  • **kw

    quietly accepts and ignores other keywords for compatibility with other similar functions

Returns

(dict, str) A dictionary with the list of available options formatted as {text: format}, and a string with information about the discovered EFITs

omfit_classes.omfit_rdb.translate_RDBserver(server, servers={'C-Mod': 'alcdb2', 'CMOD': 'alcdb2', 'D3D': 'd3drdb', 'DIII-D': 'd3drdb', 'DIIID': 'd3drdb', 'EAST': 'east_database', 'atlas.gat.com': 'd3drdb', 'd3dpub': 'huez', 'd3drdb': 'd3drdb', 'east': 'east_database', 'gat': 'd3drdb', 'huez.gat.com': 'huez', 'ignition': 'loki', 'nstx': 'nstxrdb'})[source]

This function maps mnemonic names to real RDB servers

omfit_classes.omfit_rdb.set_rdb_password(server, username=None, password=None, guest=False)[source]

Sets up an encrypted password for OMFIT to use with SQL databases on a specific server

Parameters
  • server – string The server this credential applies to

  • username – string Username on server. If a password is specified, this defaults to os.environ['USER']. If neither username nor password is specified, OMFIT will try to read both from a login file.

  • password – string The password to be encrypted. Set to '' to erase the existing password for username@server, if there is one. If None, OMFIT will attempt to read it from a default login file, like .pgpass. This may or may not be the right password for server.

  • guest – bool Use guest login and save it. Each server has its own, with the default being guest // guest_pwd .

omfit_classes.omfit_rdb.get_rdb_password(server, username=None, password=None, guest=False, guest_fallback_allowed=True)[source]

Returns the RDB username and password for server

Parameters
  • server – string Servers can have different passwords, so you have to tell us which one you're after

  • username – string Defaults to os.environ['USER']

  • password – string [optional] Override decrypted credential and just return this input instead

  • guest – bool Return guest login for server. Each server has its own, with the default being guest // guest_pwd .

  • guest_fallback_allowed – bool Allowed to return guest credentials if look up fails. Otherwise, raise ValueError

Returns

(string, string) The username and password for server

omfit_reviewplus

class omfit_classes.omfit_reviewplus.OMFITreviewplus(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.omfit_ascii.OMFITascii

load()[source]

omfit_solps

Provides classes and utility functions for managing SOLPS runs or post-processing SOLPS results.

omfit_classes.omfit_solps.find_closest_key(keys, search_string=None, block_id=None, case_sensitive=False)[source]

Decides which string in a list of strings (like a list of dictionary keys) best matches a search string. This is provided so that subtle changes in the spelling, arrangement, or sanitization of headers in input.dat don’t break the interface.

Parameters
  • keys – List of keys which are potential matches to test_key

  • search_string – String to search for; it should be similar to an expected key name in keys.

  • block_id – A string with the block number, like ‘1’ or ‘3a’. If this is provided, the standard approximate block name will be used and search_string is not needed.

  • case_sensitive – T/F: If False, strings will have output of .upper() compared instead of direct comparison.

Returns

A string containing the key name which is the closest match to search_string
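A minimal fuzzy-match sketch in the spirit of find_closest_key, built on the standard library's difflib (the real function implements its own matching and the block_id lookup; `closest_key` and the sample keys are hypothetical):

```python
import difflib

def closest_key(keys, search_string, case_sensitive=False):
    """Return the key that best matches search_string, or None."""
    if not case_sensitive:
        # Compare upper-cased strings but return the original key
        mapping = {k.upper(): k for k in keys}
        hit = difflib.get_close_matches(search_string.upper(),
                                        list(mapping), n=1, cutoff=0.0)
        return mapping[hit[0]] if hit else None
    hit = difflib.get_close_matches(search_string, list(keys), n=1, cutoff=0.0)
    return hit[0] if hit else None

print(closest_key(['*data for boundary conditions', '*cut specification'],
                  'Data for bound. cond.'))
# *data for boundary conditions
```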

class omfit_classes.omfit_solps.OMFITsolps(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.omfit_ascii.OMFITascii

Class for parsing some OMFIT input files

load()[source]
input_dat_renames()[source]
save()[source]

The save method is meant to be overridden by classes that use OMFITobject as a superclass. If left as is, this method detects whether .filename was changed and, if so, copies the file from the original .filename (saved in the .link attribute) to the new .filename

run(**kw)[source]

Activates a GUI to edit the current file in the OMFIT SOLPS module

Returns

1 if failed or output from relevant SOLPS GUI (probably None)

class omfit_classes.omfit_solps.OMFITsolpsNamelist(*args, **kwargs)[source]

Bases: omfit_classes.omfit_namelist.OMFITnamelist

Special namelist class for SOLPS files.

  • Launches a custom GUI in SOLPS module.

  • Fixes the dimensions of b2.boundary.parameters arrays by setting up collect_arrays() instructions (only if collect_arrays keyword is not already specified)

run(**kw)[source]
class omfit_classes.omfit_solps.OMFITsolpsCase(*args, **kwargs)[source]

Bases: omfit_classes.omfit_base.OMFITtree

Class for holding SOLPS runs

  • inputs: returns a list of file references for building an input deck. Includes current run and common_folder.

  • check_inputs: returns a list of missing files in the input deck

Parameters
  • filename – string

  • label – string

  • common_folder – bool Flag this run as a common folder. It is not a real run. Instead, it holds common files which are shared between the run folders.

  • baserun – bool [deprecated] Old name for common_folder. Used to support loading old projects. Do not use in new development.

  • coupled – bool Order a coupled B2.5 + Eirene run instead of a B2.5 standalone run.

  • debug – bool Activate debug mode

  • key – string [optional] Key used in the OMFIT tree

  • custom_required_files – list of strings [optional] Override the standard required_files list in a manner which will persist across updates to the class. If this option is not used and the default required_files list in the class changes, class instances will update to the new list when they are loaded from saved projects. If the customization is used, even to assign the default list, the customized list will persist even if the default changes.

  • custom_required_files_coupled – list of strings [optional] Similar to custom_required_files, but for additional files used in coupled B2.5 + Eirene runs

  • custom_required_files_continue – list of strings [optional] Similar to custom_required_files, but for additional files used to continue the run after initialization

  • custom_key_outputs – list of strings [optional] Similar to custom_required_files, but for key output files

  • version – string [optional] Used to keep track of the SOLPS code version that should be used to run this case. Should be like ‘SOLPS5.0’

  • kw – dict Additional keywords passed to super class or used to accept restored attributes.

find_cases(quiet=False)[source]

Searches parent item in OMFIT tree to find sibling instances of OMFITsolpsCase and identify them as main or common_folder. This won’t necessarily work the first time or during init because this case (and maybe others) won’t be in the tree yet.

Parameters

quiet – bool Suppress print statements, even debug

inputs(quiet=False, no_warnings=True)[source]

Gathers a set of input files from this run and the common_folder(s). Returns only common_folder inputs if this is a common_folder.

Parameters
  • quiet – bool Suppress printout, even debug

  • no_warnings – bool Suppress warnings about missing files

Returns

list of file references

check_inputs(inputs=None, initial=True, required_files=None, quiet=False)[source]

Checks whether the input deck is complete

Parameters
  • inputs – [Optional] list of references to input files If this is None, self.inputs() will be used to obtain the list.

  • initial – bool Check list vs. initialization requirements instead of continuation requirements. If True, some intermediate files won’t be added to the list. If False, it is assumed that the run is being continued and there will be more required files.

  • required_files – [Optional] list of strings IF this is None, the default self.required_files or self.required_files+required_files_coupled will be used.

  • quiet – bool Suppress print statements, even debug

Returns

list of strings List of missing files. If it is empty, then there are no missing files and everything is fine.

read_solps_iter_settings()[source]

Attempts to read SOLPS-ITER-specific settings or fills in required values with assumptions. These are special settings that are used in setup checks or similar activities.

Returns

None

check_setup(quiet=False)[source]

Check for obvious setup problems

Parameters

quiet – bool Suppress print statements, even debug

Returns

tuple of four lists of strings

  • Descriptions of setup problems

  • Commands to execute to resolve setup problems (first define r to be a reference to this OMFITsolpsCase)

  • Descriptions of minor setup issues

  • Commands to execute to resolve minor setup issues (with r defined as this OMFITsolpsCase)

check_outputs(quiet=False)[source]

Checks whether key output files are present

Parameters

quiet – bool Suppress print statements, even debug

Returns

list of missing key output files

run(**kw)[source]

Activates the current case in the OMFIT SOLPS module

Returns

1 if failed or output from relevant SOLPS GUI (probably None)

update_status()[source]
get_file(filename)[source]

Finds a file with name matching filename within this case, its common_folder, or subfolders

Assumes all files are parsed such that they will register as instances of OMFITascii

Parameters

filename – string

Returns

file reference or None

omfit_spider

class omfit_classes.omfit_spider.OMFITspider_bonfit(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.omfit_ascii.OMFITascii

OMFIT class used to interface with the SPIDER equilibrium bonfit (boundary fit) input file

Parameters
  • filename – filename passed to OMFITascii class

  • **kw – keyword dictionary passed to OMFITascii class

load()[source]
save()[source]

The save method is meant to be overridden by classes that use OMFITobject as a superclass. If left as is, this method detects whether .filename was changed and, if so, copies the file from the original .filename (saved in the .link attribute) to the new .filename

plot(ax=None)[source]

omfit_testing

This file contains classes and functions for setting up regression test suites with convenient utilities Usage:

  • Subclass a test case from OMFITtest

  • Override any of the default settings you like at the top of the class

  • Write test methods

  • Supply your test case (or a list of OMFITtest-based test cases) to manage_tests() as the first argument

See also: OMFIT-source/regression/test_template.py

class omfit_classes.omfit_testing.OMFITtest(*args, **kwargs)[source]

Bases: unittest.case.TestCase

Test case with some extra methods to help with OMFIT testing tasks

To use this class, make your own class that is a subclass of this one:

>>> class TestMyStuff(OMFITtest):
...     notify_gh_status = True  # Example of a test setting you can override

At the top of your file, override key test settings as needed.

Parameters
  • warning_level – int Instructions for turning some warnings into exceptions & ignoring others.

    -1: Make no changes to warnings.

    0: No exceptions from warnings.

    1: Ignores some math warnings related to NaNs & some which should be benign. Exceptions for other warnings.

    2: (RECOMMENDED) Ignores a small set of warnings which are probably benign, but exceptions for everything else.

    3: Exceptions for practically every warning. Only ignores some really inconsequential ones from OMFIT.

    4: No warnings allowed. Always throw exceptions!

    The warnings are changed before your test starts, so you can still override or change them in s() or __init__().

  • count_figs – bool Enable counting of figures. Manage using collect_figs(n) after opening n figures. The actual figure count will be compared to the expected count (supplied by you as the argument), resulting in an AssertionError if the count does not match.

  • count_guis – bool Enable counting of GUIs. Manage using collect_guis(n) after opening n GUIs. AssertionError if GUI count does not match expectation given via argument.

  • leave_figs_open – bool Don’t close figures at the end of each test (can lead to clutter)

  • modules_to_load – list of strings or tuples Orders OMFIT to load the modules as indicated. Strings: modules ID. Tuples: (module ID, key)

  • report_table – bool Keep a table of test results to include in final report

  • table_sorting_columns – list of strings Names of columns to use for sorting table. Passed to table’s group_by(). This is most useful if you are adding extra columns to the results table during your test class’s __init__() and overriding tearDown() to populate them.

  • notify_gh_comment – int Turn on automatic report in a GitHub comment.

    0: off

    1: always try to post or edit

    2: try to post or edit on failure only

    4: edit a comment instead of posting a new one if possible; only post if necessary

    8: edit the top comments and append the test report or replace an existing report with the same context

    5 = behaviors of 4 and 1 (for example)

  • notify_gh_status – bool Turn on automatic GitHub commit status updates

  • gh_individual_status – int

    0 or False: No individual status contexts. Maximum of one status report if notify_gh_status is set.

    1 or True: Every test gets its own status context, including a pending status report when the test starts.

    2: Tests get their own status context only if they fail. No pending status for individual tests.

    3: Like 2 but ignores notify_gh_status and posts failed status reports even if reports are otherwise disabled.

  • gh_overall_status_after_individual – bool Post the overall status even if individual status reports are enabled (set to 1 or True). Otherwise, the individual contexts replace the overall context. The overall status will be posted if individual status reports are set to 2 (only on failure).

  • notify_email – bool Automatically send a test report to the user via email

  • save_stats_to – dict-like Container for catching test statistics

  • stats_key – string [optional] Test statistics are saved to save_stats_to[stats_key]. stats_key will be automatically generated using the subclass name and a timestamp if left as None.

  • topics_skipped – list of strings Provide a list of skipped test topics in order to have them included in the test report. The logic for skipping probably happens in the setup of your subclass, so we can’t easily generate this list here.

  • omfitx – reference to OMFITx Provide a reference to OMFITx to enable GUI counting and closing. This class might not be able to find OMFITx by itself, depending on how it is loaded.

Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.

count_figs = False
count_guis = False
leave_figs_open = False
modules_to_load = []
omfitx = None
report_table = True
table_sorting_columns = ['Result', 'Time']
notify_gh_status = 0
notify_gh_comment = 0
gh_individual_status = 2
gh_overall_status_after_individual = True
notify_email = False
save_stats_to = None
topics_skipped = []
warning_level = 2
verbosity = 1
debug_topic = None
stats = {'already_saw_figures': None, 'fig_count_discrepancy': 0, 'figures_detected': 0, 'figures_expected': 0, 'gui_count_discrepancy': 0, 'guis_detected': 0, 'guis_expected': 0, 'pre_opened_figs': None, 'time_elapsed': {}}
test_timing_t0 = 0
test_timing_t1 = 0
set_gh_status_keywords = {}
force_gh_status = None
stats_key = None
assertRaisesSimilar(similar_exc, *args, **kwargs)[source]

Assert that some code raises an exception similar to the one provided.

The purpose is to bypass the way OMFIT’s .importCode() will provide a new reference to an exception that differs from the exception received by a script that does from OMFITlib_whatever import Thing

default_attr(attr, value=None)[source]

Sets an attribute to a default value if it is not already set

Parameters
  • attr – string or dict string: the name of the attribute to set dict: call self once for each key/value pair, using keys for attr and values for value

  • value – object The value to assign to the attribute
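The default_attr behavior described above can be sketched on a plain object (the real method operates on the test-case instance itself; `Holder` is a stand-in):

```python
def default_attr(obj, attr, value=None):
    """Set attr on obj only if it is not already set; accept a dict
    to apply several defaults at once."""
    if isinstance(attr, dict):
        for key, val in attr.items():
            default_attr(obj, key, val)
    elif not hasattr(obj, attr):
        setattr(obj, attr, value)

class Holder:
    pass

h = Holder()
h.existing = 1
default_attr(h, {'existing': 99, 'fresh': 2})
print(h.existing, h.fresh)  # 1 2
```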

collect_guis(count=0)[source]

Counts and then closes GUIs

Parameters

count – int Number of GUIs expected since last call. Actual and expected counts will be accumulated in self.stats.

collect_figs(count=0)[source]

Counts and/or closes figures

Parameters

count – int Number of figures expected since last call. Actual and expected counts will be accumulated in self.stats.

setUp()[source]

Hook method for setting up the test fixture before exercising it.

tearDown()[source]

Hook method for deconstructing the test fixture after testing it.

get_context()[source]

Sanitize test id() so it doesn’t start with omfit_classes.omfit_python or who knows what –> define context

classmethod printv(*args, **kw)[source]
printdq(*args)[source]
classmethod tearDownClass()[source]

Hook method for deconstructing the class fixture after running all tests in the class.

omfit_classes.omfit_testing.manage_tests(tests, failfast=False, separate=True, combined_context_name=None, force_gh_status=None, force_gh_comment=None, force_email=None, force_warning_level=None, print_report=True, there_can_be_only_one=True, raise_if_errors=True, max_table_width=-1, set_gh_status_keywords=None, post_comment_to_github_keywords=None, ut_verbosity=1, ut_stream=None, only_these=None, **kw)[source]

Utility for running a set of OMFITtest-based test suites. Example usage:

>>> class TestOmfitThingy(OMFITtest):
...     def test_thingy_init(self):
...         assert 1 != 0, '1 should not be 0'
>>> manage_tests(TestOmfitThingy)

Parameters
  • tests – OMFITtest instance or list of OMFITtest instances Define tests to run

  • failfast – bool Passed straight to unittest. Causes execution to stop at the first error instead of running all tests and reporting which pass/fail.

  • separate – bool Run each test suite separately and give it a separate context. Otherwise they’ll have a single combined context.

  • combined_context_name – string [optional] If not separate, override the automatic name ('+'.join([test.__class__.__name__ for test in tests]))

  • force_gh_status – bool [optional] If None, use GitHub status post settings from the items in tests. If True or False: force status updates on/off.

  • force_gh_comment – bool or int [optional] Like force_gh_status, but for comments, and with extra options: Set to 2 to post comments only on failure.

  • force_email – bool [optional] None: notify_email on/off defined by test. True or False: force email notifications to be on or off.

  • force_warning_level – int [optional] None: warning_level defined by test. int: Force the warning level for all tests to be this value.

  • print_report – bool Print to console / command line?

  • there_can_be_only_one – True, None, or castable as int This value is interpreted as a set of binary flags, so 6 should be interpreted as options 2 and 4 active. A value of True is converted into 255 (all the bits are True, including unused bits). A value of None is replaced by the default value, which is True or 255. A float or string will work if it can be converted safely by int().

    1: Any of the flags will activate this feature. The 1 bit has no special meaning beyond activation. If active, old GitHub comments will be deleted. The newest report may be retained.

    2: Limit deletion to reports that match the combined context of the test being run.

    4: Only protect the latest comment if it reports a failure; if the last test passed, all comments may be deleted.

    8: Limit scope to comments with matching username.

  • raise_if_errors – bool Raise an OMFITexception at the end if there were any errors

  • max_table_width – int Width in columns for the results table, if applicable. If too small, some columns will be hidden. Set to -1 to allow the table to be any width.

  • set_gh_status_keywords – dict [optional] Dictionary of keywords to pass to set_gh_status()

  • post_comment_to_github_keywords – dict [optional] Dictionary of keywords to pass to post_comment_to_github(), like thread, org, repository, and token

  • ut_verbosity – int Verbosity level for unittest. 1 is normal, 0 suppresses ., E, and F reports from unittest as it runs.

  • ut_stream – Output stream for unittest, such as StringIO() or sys.stdout

  • only_these – string or list of strings Names of test units to run (with or without leading test_). Other test units will not be run. (None to run all tests)

  • kw – quietly ignores other keywords

Returns

tuple containing:

  • list of unittest results

  • astropy.table.Table instance containing test results

  • string reporting test results, including a formatted version of the table
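The bit-flag convention used by there_can_be_only_one (and by lvl in clear_old_test_report_comments below) can be decoded with plain integer masking. This is a generic sketch with hypothetical names, not part of the OMFIT API:

```python
def decode_flags(value, default=255):
    """Decode the bit-flag convention described above.

    True -> 255 (all bits set), None -> default, other values are
    coerced with int(). Returns a dict of the four documented bits.
    """
    if value is True:
        value = 255
    elif value is None:
        value = default
    value = int(value)
    return {
        'active': bool(value & 1),                 # 1: feature enabled
        'match_context': bool(value & 2),          # 2: limit deletion to matching context
        'protect_only_failures': bool(value & 4),  # 4: protect latest comment only if it failed
        'match_username': bool(value & 8),         # 8: limit to matching username
    }
```

For example, a value of 6 decodes to options 2 and 4 active, matching the description above.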

class omfit_classes.omfit_testing.setup_warnings(level=2, record=False, module=None)[source]

Bases: object

A context manager like catch_warnings, that copies and restores the warnings filter upon exiting the context, with preset levels of warnings that turn some warnings into exceptions.

Parameters
  • record – specifies whether warnings should be captured by a custom implementation of warnings.showwarning() and be appended to a list returned by the context manager. Otherwise None is returned by the context manager. The objects appended to the list are arguments whose attributes mirror the arguments to showwarning().

  • module – to specify an alternative module to the module named ‘warnings’ and imported under that name. This argument is only useful when testing the warnings module itself.

  • level

    (int) Controls how many warnings should throw errors

    -1: Do nothing at all and return immediately

    0: No warnings are promoted to exceptions. Specific warnings defined in higher levels are ignored and the rest appear as warnings, but with ‘always’ instead of ‘default’ behavior: they won’t disappear after the first instance.

    All higher warning levels turn all warnings into exceptions and then selectively ignore some of them:

    1: Ignores everything listed in level 2, but also ignores many common math errors that produce NaN.

    2: RECOMMENDED: In addition to level 3, also ignores several warnings of low importance, but still leaves many math warnings (divide by 0) as errors.

    3: Ignores warnings which are truly irrelevant to almost any normal regression testing, such as the warning about not being able to make backup copies of scripts that are loaded in developer mode. Should be about as brutal as level 4 during the actual tests, but somewhat more convenient while debugging afterward.

    4: No warnings are ignored. This will be really annoying and not useful for many OMFIT applications.

Specify whether to record warnings and if an alternative module should be used other than sys.modules[‘warnings’].

For compatibility with Python 3.0, please consider all arguments to be keyword-only.
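The behavior described above can be imitated with the standard warnings module. This is a generic sketch of promoting warnings to exceptions inside a context while ignoring one category, not the actual setup_warnings implementation:

```python
import warnings

caught = False
with warnings.catch_warnings():
    warnings.simplefilter('error')                                   # promote all warnings to exceptions
    warnings.filterwarnings('ignore', category=DeprecationWarning)   # ...except this category
    try:
        warnings.warn('something suspicious', UserWarning)
    except UserWarning:
        caught = True                                                # promoted warning was raised
    warnings.warn('old API', DeprecationWarning)                     # silently ignored
```

On exiting the context, the original warnings filter is restored, just as with catch_warnings.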

omfit_classes.omfit_testing.get_server_name(hostname='atom')[source]

Returns the hostname. For known servers, it sanitizes hostname; e.g. irisd.cluster –> iris.gat.com

omfit_classes.omfit_testing.clear_old_test_report_comments(lvl=15, keyword='<!--This comment was automatically generated by OMFIT and was not posted directly by a human. A2ZZJfxk2910x2AZZf -->', contexts=None, remove_all=False, **kw)[source]

Removes old automatic test reports

Parameters
  • lvl – int Interpreted as a set of binary flags, so 7 means options 1, 2, and 4 are active.

    1: Actually execute the deletion commands instead of just testing. In test mode, a list of dicts containing information about comments that would be deleted is returned.

    2: Limit scope to the current context (must supply contexts) and do not delete automatic comments from other contexts.

    4: Do not preserve the most recent report unless it describes a failure.

    8: Only target comments with matching username.
  • keyword – string [optional] The marker for deletion. Comments containing this string are gathered. The one with the latest timestamp is removed from the list. The rest are deleted. Defaults to the standard string used to mark automatic comments.

  • contexts – string or list of strings [optional] Context(s) for tests to consider. Relevant only when scope is limited to present context.

  • remove_all – bool Special case: don’t exclude the latest comment from deletion because its status was already resolved. This comes up when the test would’ve posted a comment and then immediately deleted it and just skips posting. In this case, the actual last comment is not really the last comment that would’ve existed had we not skipped posting, so don’t protect it.

  • **kw

    optional keywords passed to delete_matching_gh_comments: thread: int [optional]

    Thread#, like pull request or issue number. Will be looked up automatically if you supply None and the current branch has an open pull request.

    token: string [optional]

    Token for accessing Github. Will be defined automatically if you supply None and you have previously stored a token using set_OMFIT_GitHub_token()

    org: string [optional]

    Organization on github, like ‘gafusion’. Will be looked up automatically if you supply None and the current branch has an open pull request.

    repository: string [optional]

    Repository on github within the org Will be looked up automatically if you supply None and the current branch has an open pull request.

Returns

list List of responses from github API requests, one entry per comment that the function attempts to delete. If the test keyword is set, this will be converted into a list of dicts with information about the comments that would be targeted.

omfit_classes.omfit_testing.run_test_outside_framework(test_script, catch_exp_reports=True)[source]

Deploys a test script outside the framework. Imports will be different.

To include this in a test unit, do

>>> return_code, log_tail = run_test_outside_framework(__file__)

Parameters
  • test_script – string Path to the file you want to test. If a test has a unit to test itself outside the framework, then this should be __file__. Also make sure a test can’t run itself this way if it’s already outside the framework.

  • catch_exp_reports – bool Try to grab the end of the log starting with exception reports. Only works if test_script merges stdout and stderr; otherwise the exception reports will be somewhere else. You can use with RedirectedStdStreams(stderr=sys.stdout): in your code to do the merging.

Returns

(int, string) The return code (0 is success) and the end of the output
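The core mechanism (run a script in a fresh interpreter with stderr merged into stdout, return the exit code and the log tail) can be sketched with the standard library. This is a simplified, hypothetical stand-in, not the OMFIT function itself:

```python
import subprocess
import sys

def run_script_merged(path, tail_chars=2000):
    """Run a script in a fresh interpreter and return (return_code, log_tail).

    stderr is merged into stdout so that exception reports land in the
    same log, mirroring the catch_exp_reports behavior described above.
    """
    result = subprocess.run(
        [sys.executable, path],
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,  # merge streams
        text=True,
    )
    return result.returncode, result.stdout[-tail_chars:]
```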

class omfit_classes.omfit_testing.RedirectStdStreams(stdout=None, stderr=None)[source]

Bases: object

Redirects stdout and stderr streams so you can merge them and get an easier to read log from a test.
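The standard library offers a similar stream-merging facility; this sketch uses contextlib rather than the class above, but illustrates the same idea of producing one interleaved log:

```python
import contextlib
import io
import sys

buf = io.StringIO()
# Send both streams to one buffer so prints and error messages interleave in order
with contextlib.redirect_stdout(buf), contextlib.redirect_stderr(buf):
    print('to stdout')
    print('to stderr', file=sys.stderr)

log = buf.getvalue()
```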

omfit_tglf

class omfit_classes.omfit_tglf.OMFITtglfEPinput(*args, **kwargs)[source]

Bases: omfit_classes.omfit_ascii.OMFITascii, omfit_classes.sortedDict.SortedDict

Class used to read/write TGLFEP input files

load()[source]
save()[source]

The save method is supposed to be overridden by classes which use OMFITobject as a superclass. If left as it is this method can detect if .filename was changed and if so, makes a copy from the original .filename (saved in the .link attribute) to the new .filename

class omfit_classes.omfit_tglf.OMFITalphaInput(*args, **kwargs)[source]

Bases: omfit_classes.omfit_ascii.OMFITascii, omfit_classes.sortedDict.SortedDict

load()[source]
class omfit_classes.omfit_tglf.OMFITtglf_eig_spectrum(filename, ky_file, nmodes, **kw)[source]

Bases: omfit_classes.sortedDict.OMFITdataset, omfit_classes.omfit_ascii.OMFITascii

Parameters
  • data_vars – see xarray.Dataset

  • coords – see xarray.Dataset

  • attrs – see xarray.Dataset

load()[source]
plot(axs=None)[source]

Plot the eigenvalue spectrum as growth rate and frequency

Parameters

axs – A length 2 sequence of matplotlib axes (growth rate, frequency)

class omfit_classes.omfit_tglf.OMFITtglf_wavefunction(*args, **kwargs)[source]

Bases: omfit_classes.omfit_ascii.OMFITascii, omfit_classes.sortedDict.SortedDict

load()[source]
plot()[source]
class omfit_classes.omfit_tglf.OMFITtglf_flux_spectrum(filename, ky_file, field_labels, spec_labels, **kw)[source]

Bases: omfit_classes.sortedDict.OMFITdataset, omfit_classes.omfit_ascii.OMFITascii

Parse the out.tglf.sum_flux_spectrum file and provide a convenient means for plotting it.

Parameters
  • filename – Path to the out.tglf.sum_flux_spectrum file

  • n_species – Number of species

  • n_fields – Number of fields

  • data_vars – see xarray.Dataset

  • coords – see xarray.Dataset

  • attrs – see xarray.Dataset

load()[source]
plot(fn=None)[source]

Plot the flux spectra

Parameters

fn – A FigureNotebook instance

class omfit_classes.omfit_tglf.OMFITtglf_nete_crossphase_spectrum(filename, ky_file, nmodes, **kw)[source]

Bases: omfit_classes.sortedDict.OMFITdataset, omfit_classes.omfit_ascii.OMFITascii

Parse the out.tglf.nete_crossphase_spectrum file and provide a convenient means for plotting it.

Parameters
  • filename – Path to the out.tglf.nete_crossphase_spectrum file

  • nmodes – Number of modes computed by TGLF and used in the

  • data_vars – see xarray.Dataset

  • coords – see xarray.Dataset

  • attrs – see xarray.Dataset

load()[source]
plot(ax=None)[source]

Plot the nete crossphase spectrum

Parameters

ax – A matplotlib axes instance

class omfit_classes.omfit_tglf.OMFITtglf_potential_spectrum(filename, ky_file, nmodes, **kw)[source]

Bases: omfit_classes.sortedDict.OMFITdataset, omfit_classes.omfit_ascii.OMFITascii

Parse the potential fluctuation spectrum in out.tglf.potential_spectrum and provide a convenient means for plotting it.

Parameters
  • filename – Path to the out.tglf.potential_spectrum file

  • data_vars – see xarray.Dataset

  • coords – see xarray.Dataset

  • attrs – see xarray.Dataset

load()[source]
plot(fn=None)[source]

Plot the fields

Parameters

fn – A FigureNotebook instance

class omfit_classes.omfit_tglf.OMFITtglf_fluct_spectrum(filename, ky_file, ns, label, spec_labels, **kw)[source]

Bases: omfit_classes.sortedDict.OMFITdataset, omfit_classes.omfit_ascii.OMFITascii

Parse the {density,temperature} fluctuation spectrum in out.tglf.{density,temperature}_spectrum and provide a convenient means for plotting it.

Parameters
  • filename – Path to the out.tglf.{density,temperature}_spectrum file

  • ns – Number of species

  • label – Type of fluctuations (‘density’ or ‘temperature’)

  • data_vars – see xarray.Dataset

  • coords – see xarray.Dataset

  • attrs – see xarray.Dataset

pretty_names = {'density': '\\delta n ', 'temperature': '\\delta T '}
load()[source]
plot(axs=None)[source]

Plot the fluctuation spectrum

Parameters

axs – A list of matplotlib axes of length self.ns

class omfit_classes.omfit_tglf.OMFITtglf_intensity_spectrum(filename, ky_file, nmodes, ns, spec_labels, **kw)[source]

Bases: omfit_classes.sortedDict.OMFITdataset, omfit_classes.omfit_ascii.OMFITascii

Parse the intensity fluctuation spectrum in out.tglf.{density,temperature}_spectrum and provide a convenient means for plotting it.

Parameters
  • filename – Path to the out.tglf.{density,temperature}_spectrum file

  • ns – Number of species

  • label – Type of fluctuations (‘density’ or ‘temperature’)

  • data_vars – see xarray.Dataset

  • coords – see xarray.Dataset

  • attrs – see xarray.Dataset

load()[source]
class omfit_classes.omfit_tglf.OMFITtglf(*args, **kwargs)[source]

Bases: omfit_classes.omfit_dir.OMFITdir

The purpose of this class is to be able to store all results from a given TGLF run in its native format, while parsing the important parts into the tree

Parameters

filename – Path to TGLF run

plot()[source]
saturation_rule(saturation_rule_name)[source]
class omfit_classes.omfit_tglf.OMFITtglf_nsts_crossphase_spectrum(filename, ky_file, nmodes, spec_labels, **kw)[source]

Bases: omfit_classes.sortedDict.OMFITdataset, omfit_classes.omfit_ascii.OMFITascii

Parse the out.tglf.nsts_crossphase_spectrum file and provide a convenient means for plotting it.

Parameters
  • filename – Path to the out.tglf.nsts_crossphase_spectrum file

  • nmodes – Number of modes computed by TGLF and used in the

  • data_vars – see xarray.Dataset

  • coords – see xarray.Dataset

  • attrs – see xarray.Dataset

load()[source]
plot(axs=None)[source]

Plot the nsts crossphase spectrum

Parameters

axs – A sequence of matplotlib axes instances of length len(self[‘species’])

omfit_classes.omfit_tglf.sum_ky_spectrum(sat_rule_in, ky_spect, gp, ave_p0, R_unit, kx0_e, potential, particle_QL, energy_QL, toroidal_stress_QL, parallel_stress_QL, exchange_QL, etg_fact=1.25, c0=32.48, c1=0.534, exp1=1.547, cx_cy=0.56, alpha_x=1.15, **kw)[source]

Perform the sum over the ky spectrum. The inputs to this function should already be weighted by the intensity function.

nk –> number of elements in ky spectrum
nm –> number of modes
ns –> number of species
nf –> number of fields (1: electrostatic, 2: electromagnetic parallel, 3: electromagnetic perpendicular)

Parameters
  • sat_rule_in

  • ky_spect – k_y spectrum [nk]

  • gp – growth rates [nk, nm]

  • ave_p0 – scalar average pressure

  • R_unit – scalar normalized major radius

  • kx0_e – spectral shift of the radial wavenumber due to VEXB_SHEAR [nk]

  • potential – input potential fluctuation spectrum [nk, nm]

  • particle_QL – input particle fluctuation spectrum [nk, nm, ns, nf]

  • energy_QL – input energy fluctuation spectrum [nk, nm, ns, nf]

  • toroidal_stress_QL – input toroidal_stress fluctuation spectrum [nk, nm, ns, nf]

  • parallel_stress_QL – input parallel_stress fluctuation spectrum [nk, nm, ns, nf]

  • exchange_QL – input exchange fluctuation spectrum [nk, nm, ns, nf]

  • etg_fact – scalar TGLF SAT0 calibration coefficient [1.25]

  • c0 – scalar TGLF SAT0 calibration coefficient [32.48]

  • c1 – scalar TGLF SAT0 calibration coefficient [0.534]

  • exp1 – scalar TGLF SAT0 calibration coefficient [1.547]

  • cx_cy – scalar TGLF SAT0 calibration coefficient [0.56] (from TGLF 2008 POP Eq.13)

  • alpha_x – scalar TGLF SAT0 calibration coefficient [1.15] (from TGLF 2008 POP Eq.13)

  • **kw – any additional argument should follow the naming convention of the TGLF_inputs

Returns

dictionary with summations over ky spectrum:

  • particle_flux_integral: [nm, ns, nf]

  • energy_flux_integral: [nm, ns, nf]

  • toroidal_stresses_integral: [nm, ns, nf]

  • parallel_stresses_integral: [nm, ns, nf]

  • exchange_flux_integral: [nm, ns, nf]
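The array bookkeeping above can be illustrated with a toy integration over the ky axis. This assumes numpy and uses a hypothetical helper name; it omits the intensity weighting and saturation physics entirely, so it is only a sketch of the shape manipulation, not of the TGLF saturation rules:

```python
import numpy as np

def integrate_over_ky(ky_spect, flux_QL):
    """Toy trapezoidal integration of a [nk, nm, ns, nf] quasi-linear
    spectrum over the ky axis, returning an [nm, ns, nf] array."""
    dky = np.diff(ky_spect)                     # [nk-1] interval widths
    mid = 0.5 * (flux_QL[1:] + flux_QL[:-1])    # trapezoid midpoints along the ky axis
    # Contract dky against the ky axis: sum_k dky[k] * mid[k, ...]
    return np.tensordot(dky, mid, axes=(0, 0))
```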

omfit_thomson

class omfit_classes.omfit_thomson.OMFITthomson(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict

Helps with fetching data from the Thomson Scattering diagnostic.

It also has some other handy features to support analysis based on Thomson data:

  • Filter data by fractional error (e.g. throw out anything with > 10% uncertainty)

  • Filter data by reduced chi squared (e.g. throw out anything with redchisq > 8)

  • Filter data using ELM timing (e.g. only accept data if they are sampled between 50 and 99% of their local inter-ELM period)

Parameters
  • device – Device name, like ‘DIII-D’

  • shot – Shot number to analyze.

  • efitid – String describing the EFIT to use for mapping, such as ‘EFIT01’. For DIII-D, ‘EFIT03’ and ‘EFIT04’ are recommended because they are calculated on the same time base as TS.

  • revision_num – A string specifying a revision like ‘REVISION00’ or just the number like 0. -1 Selects the “blessed” or “best” revision automatically. -2 selects the best revision the same as in -1, but also creates a folder with raw data from all revisions.

  • subsystems – List of Thomson systems to handle. For DIII-D, this can be any subset of [‘core’, ‘divertor’, ‘tangential’]. For other devices, this setting does nothing and the systems list will be forced to [‘core’]. Set this to ‘auto_select’ to pick a setting that’s a good idea for your device. (Currently, all non-DIII-D devices are forced to [‘core’].)

  • override_default_measurements – list of strings [optional] Use this to do lightweight gathering of only a few quantities. More advanced uses, like filtering, require all of the default quantities.

  • quality_filters

    Set to ‘default’ or a dictionary structure specifying settings for quality filters. Missing settings will be set to default values (so an empty dictionary {} is a valid input here). Top level settings in quality_filters:

    • remove_bad_slices: Any time-slice which is all bad measurements or any chord which is all bad will be identified. These can be removed from the final dataset, which saves the user from carting around bad data.

    • set_bad_points_to_zero: Multiply data by the okay flag, which will set all points marked as bad to 0+/-0

    • ip_thresh_frac_of_max: Set a threshold on Ip so that slices with low Ip (such as at the start of the shot or during rampdown) will not pass the filter.

    In addition, there are filters specified on a subsystem-by-subsystem basis. In addition to the real subsystems, there is a ‘global_override’ subsystem, which takes precedence if its settings aren’t None.

    • bad_chords: array or list of bad chord indices for this subsystem. Set to empty list if no bad channels. (Real subsystems only, no global override)

    • redchisq_limit: A number specifying the maximum acceptable reduced chi squared value. This refers to the fit to Thomson’s raw pulsed and DC data signals to determine Te and ne.

    • frac_temp_err_hot_max: Upper limit on acceptable fractional uncertainty in Te when Te is above the hot_cold_boundary threshold.

    • frac_temp_err_cold_max: Upper limit on acceptable fractional uncertainty in Te when Te is below the hot_cold_boundary threshold.

    • hot_cold_boundary: Te boundary between “hot” and “cold” temperatures, which have different fractional uncertainty limits.

    • frac_dens_err_max: Maximum fractional uncertainty in ne measurements.

  • elm_filter – Provide an instance of an ELM filtering class like OMFITelm or set to None to have OMFITthomson set this up automatically.

  • efit_data – This is usually None, which instructs OMFITthomson to gather its own EFIT data to use in mapping. However, you can pass in a dictionary with contents matching the format returned by self.gather_efit_data() and then self.gather_efit_data() will be skipped.

  • allow_backup_efitid – T/F: Allow self.gather_efit_data() to choose self.efitid if it fails to find data for the requested EFIT.

  • debug – bool Debug mode saves some intermediate results in a special dictionary.

  • verbose – bool Always print debugging statements. May be useful when using this class outside the framework.
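The fractional-error filtering described under quality_filters amounts to simple masking. A minimal sketch with a hypothetical helper name (the actual OMFITthomson filter additionally handles the hot/cold Te boundary, reduced chi squared, and ELM timing):

```python
def frac_err_okay(y, e, max_frac=0.1):
    """Flag measurements whose fractional uncertainty e/y is acceptable.

    Returns a list of booleans: True where e/y <= max_frac, False
    otherwise. Non-positive values fail automatically, which also
    avoids division by zero.
    """
    flags = []
    for yi, ei in zip(y, e):
        flags.append(yi > 0 and ei / yi <= max_frac)
    return flags
```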

printdq(*args)[source]

Wrapper for controlling verbosity locally

lookup_latest_shot()[source]

Looks up the last shot. Works for DIII-D. Used automatically if shot <= 0, in which case the shot number is treated as relative to the last shot.

report_status()[source]

Prints out a report that can be read easily from the command line

find_all_revisions(verbose=True)[source]

Looks up all extant revisions of DIII-D Thomson data for the current shot.

check_filtered_data()[source]

Check that results are available and that they match shot and filter settings

check_efit_info()[source]

Checks for consistency of any currently loaded EFIT data

Returns

Flag indicating whether new EFIT data are needed (T) or not (F).

to_dataset()[source]

Packs data into a list of xarray Dataset instances, one per subsystem

Returns

dictionary with one Dataset instance for each subsystem

gather(verbose=True)[source]

Gathers Thomson scattering data from MDSplus

gather_efit_info()[source]

Gathers basic EFIT data for use in mapping.

map(note='', remove_efits_with_large_changes=False)[source]

Maps Thomson to the EFIT. Because most of the TS data are at the same radial location, the interpolation of psi(r,z) onto R,Z for Thomson is sped up by first interpolating to the R for most Thomson, then interpolating along the resulting 1D psi(z). If there is more than one R value (such as if tangential TS is included), the program loops over each unique R value. This could be done with one 2D interpolation, but I think it would be slower.

Parameters
  • note – Prints a note when starting mapping.

  • remove_efits_with_large_changes – Filter out EFIT time slices where the axis or boundary value of un-normalized psi changes too fast. It’s supposed to trim questionable EFITs from the end, but it doesn’t seem to keep them all out with reasonable thresholds. This feature was a nice idea and I think it’s coded properly, but it isn’t performing at the expected level, so either leave it off or tune it up better.

filter()[source]

Data quality and ELM filter

ELM phase and timing data are used to select slices. Individual data are flagged to indicate whether they passed the filter. If any chords are completely bad, then they are just removed from the output.

select_time_window(t0, dt=25.0, systems='all', parameters=['temp', 'density'], psi_n_range=[0, 1.5], strict=None, use_shifted_psi=True, alt_x_path=None, comment=None, perturbation=None, realtime=False)[source]

Grabs Thomson data for the time window [t0-dt, d0+dt] for the sub-systems and parameters specified.

Parameters
  • t0 – Center of the time window in ms

  • dt – Half-width of the time window in ms

  • systems – Thomson sub-systems to gather (like [‘core’, ‘divertor’, ‘tangential’]). If None: detect which systems are available.

  • parameters – Parameters to gather (like [‘temp’, ‘density’, ‘press’])

  • psi_n_range – Range of psi_N values to accept

  • strict – Ignored (function accepts this keyword so it can be called generically with same keywords as its counterpart in the quickCER module)

  • use_shifted_psi – T/F attempt to look up corrected psi_N (for profile alignment) in alt_x_path.

  • alt_x_path – An alternative path for gathering psi_N. This can be an OMFIT tree or a string which will give an OMFITtree when operated on with eval(). Use this to provide corrected psi_N values after doing profile alignment. Input should be an OMFIT tree containing trees for all the sub systems being considered in this call to select_time_window (‘core’, ‘tangential’, etc.). Each subsystem tree should contain an array of corrected psi_N values named ‘psin_corrected’.

  • comment – Optional: you can provide a string and your comment will be announced at the start of execution.

  • perturbation

    None or False for no perturbation or a dictionary with instructions for perturbing data for doing uncertainty quantification studies such as Monte Carlo trials. The dictionary can have these keys:

    • random: T/F (default T): to scale perturbations by normally distributed random numbers

    • sigma: float (default 1): specifies scale of noise in standard deviations. Technically, I think you could get away with passing in an array of the correct length instead of a scalar.

    • step_size: float: specifies absolute value of perturbation in data units (overrides sigma if present)

    • channel_mask: specifies which channels get noise (dictionary with a key for each parameter to mask containing a list of channels or list of T/F matching len of channels_used for that parameter)

    • time_mask: specifies which time slices get noise (dictionary with a key for each parameter to mask containing a list of times or list of T/F matching len of time_slices_used for that parameter)

    • data_mask: specifies which points get noise (overrides channel_mask and time_mask if present) (dictionary with a key for each parameter to mask containing a list of T/F matching len of data for that parameter. Note: this is harder to define than the channel and time lists.)

    Shortcut: supply True instead of a dictionary to add 1 sigma random noise to all data.

  • realtime – T/F: Gather realtime data instead of post-shot analysis results.

Returns

A dictionary containing all the parameters requested. Each parameter is given in a dictionary containing x, y, e, and other information. x, y, and e are sorted by psi_N.
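The perturbation options above boil down to adding scaled noise to selected points. A minimal sketch of the sigma-scaled case, using a hypothetical helper rather than the module's own code (masks and step_size are omitted):

```python
import random

def perturb(y, e, sigma=1.0, randomize=True, seed=None):
    """Perturb data y by sigma standard deviations of its uncertainty e.

    With randomize=True (the documented default) each point's shift is
    additionally scaled by a normally distributed random number;
    otherwise the shift is deterministic, sigma * e per point.
    """
    rng = random.Random(seed)
    out = []
    for yi, ei in zip(y, e):
        scale = rng.gauss(0.0, 1.0) if randomize else 1.0
        out.append(yi + sigma * scale * ei)
    return out
```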

calculate_derived(zeff=2.0, ti_over_te=1.0, zi=1.0, zb=6.0, mi=2.0, mb=12.0, use_uncert=False)[source]

Calculate simple derived quantities.

This is limited to quantities which are insensitive to ion measurements and can be reasonably estimated from electron data only with limited assumptions about ions.

Assumptions:

Parameters
  • zeff – float Assumed effective charge state of ions in the plasma: Z_eff=(n_i * Z_i^2 + n_b * Z_b^2) / (n_i * Z_i + n_b * Z_b)

  • ti_over_te – float or numeric array matching shape of te Assumed ratio of ion to electron temperature

  • zi – float or numeric array matching shape of ne Charge state of main ions. Hydrogen/deuterium ions = 1.0

  • mi – float or numeric array matching shape of te Mass of main ions in amu. Deuterium = 2.0

  • zb – float or numeric array matching shape of ne Charge state of dominant impurity. Fully stripped carbon = 6.0

  • mb – float or numeric array matching shape of ne Mass of dominant impurity ions in amu. Carbon = 12.0

  • use_uncert – bool Propagate uncertainties (takes longer)

static lnlambda(te, ne, debye)[source]

Calculate the Coulomb logarithm for electrons ln(Lambda)

Parameters
  • te – array Electron temperature in eV

  • ne – array matching dimensions of te Electron density in m^-3

  • debye – array Debye length in m

Returns

lnLambda, lnLambda_e

static resistivity(te, zeff, ln_lambda)[source]

Calculate Spitzer transverse resistivity using NRL Plasma Formulary 2009 Page 29

Parameters
  • te – float or array Electron temperature in eV

  • zeff – float or array matching dimensions of te Effective ion charge state for collisions with electrons

  • ln_lambda – float or array matching dimensions of te Coulomb logarithm

Returns

array matching dimensions of te Resistivity in Ohm*m
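The NRL formulary expression referenced above can be written compactly. The coefficient below is the commonly quoted transverse Spitzer value (Te in eV); treat it as an assumption to be checked against the formulary before quantitative use:

```python
def spitzer_resistivity(te_ev, zeff, ln_lambda):
    """Transverse Spitzer resistivity in Ohm*m (NRL Plasma Formulary form):
    eta_perp = 1.03e-4 * Z * ln(Lambda) * Te**(-3/2), with Te in eV."""
    return 1.03e-4 * zeff * ln_lambda * te_ev ** -1.5
```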

static taue(te, ne, ln_lambda_e, m_b=2.0, z_b=1.0)[source]

Calculates the Spitzer slowing-down time using equation 5 of W. Heidbrink’s 1991 DIII-D physics memo

Parameters
  • te – An array of T_e values in eV

  • ne – Array of n_e values in m^-3

  • ln_lambda_e – Coulomb logarithm for electron-electron collisions

  • m_b – Atomic mass of beam species (normally deuterium) in atomic mass units

  • z_b – Atomic number of beam species (normally deuterium) in elementary charges

Returns

Fast ion slowing down time in ms.

static data_filter(*args, **kwargs)[source]

Removes bad values from arrays to avoid math errors, for use when calculating Thomson derived quantities

Parameters
  • args – list of items to sanitize

  • kwargs – Keywords - okay: array matching dimensions of items in args: Flag indicating whether each element in args is okay - bad_fill_value: float: Value to use to replace bad elements

Returns

list of sanitized items from args, followed by bad
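The sanitizing step can be sketched in a few lines; this hypothetical helper only illustrates the idea of swapping flagged-bad elements for a safe fill value:

```python
def sanitize(values, okay, bad_fill_value=0.0):
    """Replace elements flagged as not-okay with bad_fill_value so that
    subsequent math (logs, divisions) does not hit bad inputs."""
    return [v if ok else bad_fill_value for v, ok in zip(values, okay)]
```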

plot(fig=None, axs=None)[source]

Launches default plot, which is normally self.elm_detection_plot(); can be changed by setting self.default_plot.

Returns

(Figure instance, array of Axes instances) Tuple containing references to figure and axes used by the plot

static setup_axes(fig, nrow=1, ncol=1, squeeze=True, sharex='none', sharey='none')[source]

Utility: add grid of axes to existing figure

Parameters
  • fig – Figure instance

  • nrow – int Number of rows

  • ncol – int Number of columns

  • squeeze – bool Squeeze output to reduce dimensions. Otherwise, output will be a 2D array

  • sharex – string Describe how X axis ranges should be shared/linked. Pick from: ‘row’, ‘col’, ‘all’, or ‘none’

  • sharey – string Describe how Y axis ranges should be shared/linked. Pick from: ‘row’, ‘col’, ‘all’, or ‘none’

Returns

Axes instance or array of axes instances

profile_plot(t=None, dt=25.0, position_type='psi', params=['temp', 'density'], systems='all', unit_convert=True, fig=None, axs=None)[source]

Plots profiles of physics quantities vs. spatial position for a selected time window

Parameters
  • t – float Center of time window in ms

  • dt – float Half-width of time window in ms. All data between t-dt and t+dt will be plotted.

  • position_type – string Name of X coordinate. Valid options: ‘R’, ‘Z’, ‘PSI’

  • params – list of strings List physics quantities to plot. Valid options are temp, density, and press

  • systems – list of strings or ‘all’ List subsystems to include in the plot. Choose all to use self.subsystems.

  • unit_convert – bool Convert units from eV to keV, etc. so most quantities will be closer to order 1 in the core.

  • fig – Figure instance Plot will be drawn using existing figure instance if one is provided. Otherwise, a new figure will be made.

  • axs – 1D array of Axes instances matching length of params. Plots will be drawn using existing Axes instances if they are provided. Otherwise, new axes will be added to fig.

Returns

Figure instance, 1D array of Axes instances

contour_plot(position_type='psi', params=['temp', 'density'], unit_convert=True, combine_data_before_contouring=False, num_color_levels=41, fig=None, axs=None)[source]

Plots contours of a physics quantity vs. time and space

Parameters
  • position_type – string Select position from ‘R’, ‘Z’, or ‘PSI’

  • params – list of strings Select parameters from ‘temp’, ‘density’, ‘press’, or ‘redchisq’

  • unit_convert – bool Convert units from e.g. eV to keV to try to make most quantities closer to order 1

  • combine_data_before_contouring – bool Combine data into a single array before calling tricontourf. This may look smoother, but it can hide the way arrays from different subsystems are stitched together

  • num_color_levels – int Number of contour levels

  • fig – Figure instance Provide a Matplotlib Figure instance and an appropriately dimensioned array of Axes instances to overlay

  • axs – array of Axes instances Provide a Matplotlib Figure instance and an appropriately dimensioned array of Axes instances to overlay

Returns

Figure instance, array of Axes instances Returns references to figure and axes used in plot

omfit_timcon

class omfit_classes.omfit_timcon.OMFITtimcon(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.omfit_ascii.OMFITascii

OMFIT class for parsing/writing the NBI time traces of the DIII-D PCS

load()[source]
save()[source]

The save method is supposed to be overridden by classes which use OMFITobject as a superclass. If left as it is this method can detect if .filename was changed and if so, makes a copy from the original .filename (saved in the .link attribute) to the new .filename

waveform(beam, x, y=None, phases=None, duty_cycle_1=False)[source]
plot_waveform(beam, ax=None, show_legend=True)[source]
plot()[source]

omfit_toksearch

class omfit_classes.omfit_toksearch.OMFITtoksearch(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict

This class is used to query a database through the toksearch API

Parameters
  • serverPicker – (string)A string designating the server to create the toksearch query on.

  • shots – A list of shot numbers (ints) to be fetched

  • signals – A dict where each key corresponds to the signal name returned by toksearch, and each entry is a list which corresponds to a signal object fetched by toksearch. The first element of the list is the string corresponding to each signal name, i.e. ‘PtDataSignal’, the 2nd and 3rd entries are the args (list), and keyword args (dict) respectively. Ex) [‘PtDataSignal’,[‘ip’],{}] corresponds to a fetch for PtData ‘ip’ signal.

  • datasets – A dict representing xarray datasets to be created from fetched signals.

  • aligners – A dict where the keys are name of the dataset to align and the entries are a corresponding list of Aligners

  • functions – A list of functions or executable strings to be executed in the toksearch mapping stage

  • where – (string) An evaluatable string (executed in the namespace of the record) which should return a boolean indicating whether the record should be returned by the toksearch query. Use this to filter out shots by certain criteria: a record is kept when the string evaluates to True and dropped when it evaluates to False.

  • keep – A list of strings pertaining to which attributes (signal,dataset etc.) of each record to be returned by toksearch query default: returns all attrs in namespace record

  • compute_type – (string) Type of method to be used to run the pipeline. Options: ‘serial’,’spark’,’ray’ compute_type=’ray’ gives better memory usage and parallelization

  • return_data – (string) How the data fetched by toksearch should be structured. Options: 'by_shot', 'by_signal'. 'by_shot' returns a dictionary with shots as keys, each entry storing that record's namespace. 'by_signal' returns a dictionary keyed by the union of all record attrs, with a dictionary organized by shot number under each key. Default: 'by_shot'. NOTE: when fetching 'by_signal', Datasets will be concatenated together over all valid shots.

  • warn – (bool) If True, the user will be warned before pulling back more than 50% of the available memory and can respond accordingly. This is a safety precaution when pulling back large datasets that might otherwise exhaust memory. (default: True)

  • use_dask – (bool) If True, created datasets will be loaded using dask. Loading with dask reduces the amount of RAM used by saving the data to disk and loading it into memory only in chunks. (default: False)

  • load_data – (bool) If False, the data will be transferred to disk under the OMFIT current working directory, but will not be loaded into memory (RAM), and thus the OMFITtree will not be updated. Use this when fetching data too large to fit into memory. (default: True)

  • **compute_kwargs

    keyword arguments to be passed into the toksearch compute functions
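The query layout described above can be sketched with plain Python structures. The shot numbers, signal names, and pointname arguments below are illustrative placeholders, not a real experiment setup:

```python
# Hypothetical shot list (placeholder values)
shots = [165920, 165921]

# Each 'signals' entry: [signal class name, args (list), keyword args (dict)]
signals = {
    'ip': ['PtDataSignal', ['ip'], {}],
    'density': ['MdsSignal', [r'\density', 'electrons'], {}],
}

# 'where' is evaluated in each record's namespace; False filters the shot out
where = "record['ip'] is not None"

# Arguments as they would be passed to OMFITtoksearch (sketch only)
query = dict(shots=shots, signals=signals, where=where,
             compute_type='ray', return_data='by_shot')
```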

load()[source]
omfit_classes.omfit_toksearch.TKS_MdsSignal(*args, **kw)[source]
omfit_classes.omfit_toksearch.TKS_PtDataSignal(*args, **kw)[source]
omfit_classes.omfit_toksearch.TKS_Aligner(align_with, **kw)[source]

Function that takes a signal name and keyword arguments and puts them in the format that the toksearch query method expects. The specified signal is the one that the dataset is intended to be aligned with.

Parameters

align_with – A string representing the name of the signal in 'signals' that the dataset is to be aligned with.

omfit_classes.omfit_toksearch.TKS_OMFITLoader(*args, **kw)[source]

omfit_toq

class omfit_classes.omfit_toq.OMFITtoqProfiles(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.omfit_ascii.OMFITascii

TOQ profiles data files

load()[source]
plot()[source]
class omfit_classes.omfit_toq.OMFITdskeqdata(*args, **kwargs)[source]

Bases: omfit_classes.omfit_gato.OMFITdskgato

OMFIT class used to interface to equilibria files generated by TOQ (dskeqdata files)

Parameters
  • filename – filename passed to OMFITobject class

  • **kw – keyword dictionary passed to OMFITobject class

load()[source]
save()[source]

The save method is meant to be overridden by classes that use OMFITobject as a superclass. If left as is, this method detects whether .filename was changed and, if so, makes a copy from the original .filename (saved in the .link attribute) to the new .filename

omfit_transp

class omfit_classes.omfit_transp.OMFITtranspNamelist(*args, **kwargs)[source]

Bases: omfit_classes.omfit_namelist.OMFITnamelist

Class used to interface with the TRANSP input namelist. This is necessary because the TRANSP input namelist is not in a standard format (e.g. it has the update_time functionality)

Parameters
  • filename – filename passed to OMFITobject class

  • **kw – keyword dictionary passed to OMFITobject class

load()[source]

Method used to load the content of the file specified in the .filename attribute

Returns

None

save()[source]

Method used to save the content of the object to the file specified in the .filename attribute

Returns

None

property update_time

This attribute returns a SortedDictionary with the update times elements in the TRANSP namelist

class omfit_classes.omfit_transp.OMFITtranspData(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict

Class for plotting and manipulating TRANSP output MDS and CDF files

Initialize data object from an OMFITmdsValue

Parameters
  • transp_output – OMFITmds object or OMFITnc object containing transp run

  • data (str) – Output data label

>> OMFIT['NC_d3d'] = OMFITnc(filename)
>> OMFITtranspData(OMFIT['NC_d3d'], 'Q0')
>> OMFITtranspData(OMFIT['NC_d3d'], 'NE')

>> OMFIT['MDS_d3d'] = OMFITmds(server='D3D', treename='transp', shot='1551960501')
>> OMFITtranspData(OMFIT['MDS_d3d'], 'Q0')
>> OMFITtranspData(OMFIT['MDS_d3d'], 'NE')

>> OMFIT['MDS_iter'] = OMFITmds(server='transpgrid', treename='transp_iter', shot='201001676')
>> OMFITtranspData(OMFIT['MDS_iter'], 'Q0')
>> OMFITtranspData(OMFIT['MDS_iter'], 'NE')

plot(axes=None, label='RPLABEL', slice_axis=None, slice_at=[], label_show_slice=True, **kw)[source]

Plot TRANSP data, using default metadata.

If the data is one dimensional, it is plotted using the matplotlib plot function. If 2D, the default is to show the data using View2d. If a slice_axis is defined, the slices are shown as line plots. Extra keywords are passed to the plot or View2d function used.

Parameters
  • axes (Axes) – Axes in which to make the plots.

  • label (str) – Labels the data in the plot. If ‘LABEL’ or ‘RPLABEL’ these values are taken from the Data.

  • slice_axis (int) – Slices 2D data along the radial (0) or time (1) axis.

  • slice_at (np.ndarray) – Slices made in slice_axis. An empty list plots all available slices.

Returns

Figure

aol(rmnmp=None)

Normalized inverse scale length a/Lx, defined via the derivative with respect to the midplane minor radius "r": aol = -(a/X) dX/dr

Parameters
  • d – OMFITtranspData object from the TRANSP tree.

  • rmnmp – OMFITtranspData “RMNMP”

Returns

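As a numerical illustration of the a/Lx convention above, here is a minimal numpy sketch with a hypothetical exponential profile (not the OMFIT implementation):

```python
import numpy as np

# Hypothetical exponential profile X(r) on a midplane minor-radius grid
r = np.linspace(0.01, 0.6, 200)     # midplane minor radius [m]
a = r[-1]                           # minor radius used for the normalization
L = 0.2                             # scale length of the profile [m]
X = np.exp(-r / L)                  # profile shape, e.g. a density

aol = -(a / X) * np.gradient(X, r)  # normalized inverse scale length a/LX
# for a pure exponential this is constant: a/L
```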
set_grid(zone='X', dvol=None)

Interpolate 2D TRANSP data to rho grid zone-boundary or zone-centered values.

Parameters
  • d – OMFITtranspData object from the mds+ TRANSP tree.

  • zone (str) – 'XB' for zone-boundary rho, 'X' for zone-centered. 'V' or 'VB' for volume.

  • dvol – OMFITtranspData object ‘dvol’ from the mds+ TRANSP tree. If None, will be taken from Data’s MDStree.

Returns

OMFITtranspData object on the specified rho grid

Example: Assuming the root is an OMFIT TRANSP module with a loaded run.

>> mvisc = OMFITtranspData(root['OUTPUTS']['TRANSP_OUTPUT'], 'MVISC')
>> mvisc['XAXIS']
'X'
>> print(mvisc['DIM_OF'][0][:3])
[ 0.01 0.03 0.05]
>> mviscb = set_grid(mvisc, 'XB')
>> mviscb['XAXIS']
'XB'
>> print(mviscb['DIM_OF'][0][:3])
[ 0.02 0.04 0.06]

sint(darea=None)

Surface integrate a TRANSP OMFITmdsValue object. Currently only available for objects from the TRANSP tree.

Parameters
  • d (OMFITtranspData) – OMFITtranspData object from the mds+ TRANSP OUTPUTS.TWO_D tree.

  • darea (OMFITtranspData or None) – OMFITtranspData object ‘darea’ from the mds TRANSP tree. If None, will be taken from Data’s MDStree.

Example:

>> mds = OMFITmds('DIII-D', 'transp', 1633030101)
>> cur = OMFITtranspData(mds, 'CUR')
>> da = OMFITtranspData(mds, 'DAREA')
>> curi = cur.sint(darea=da)
>> print(curi['DATA'][0,-1])
1.16626e+06
>> pcur = OMFITtranspData(mds, 'PCUR')
>> print(pcur['DATA'][0])
1.16626e+06

tavg(time=None, avgtime=None)

Time average data

Parameters
  • d – OMFITtranspData object from the TRANSP tree.

  • time – Center time in seconds

  • avgtime – Averaging window (+/-) in seconds

Returns

For 1D input, an uncertainty uarray of the data's time average and standard deviation. For 2D input, an uncertainty uarray of the profile with time average and standard deviation.

Example: Assuming data in root[‘OUTPUTS’][‘TRANSP_OUTPUT’]

time = 2.0
avgtime = 0.1

# 1D
tr_neutt = OMFITtranspData(root['OUTPUTS']['TRANSP_OUTPUT'], 'NEUTT')
tr_neutt_tavg = tr_neutt.tavg(time=time, avgtime=avgtime)
tr_neutt.plot()
ax = gca()
uband([time-avgtime, time+avgtime], np.repeat(tr_neutt_tavg, 2), ax=ax)
ax.text(time, nominal_values(tr_neutt_tavg) + std_devs(tr_neutt_tavg),
        '{0:0.3g}+/-{1:0.3g}'.format(nominal_values(tr_neutt_tavg), std_devs(tr_neutt_tavg)))

# 2D
tr_ne = OMFITtranspData(root['OUTPUTS']['TRANSP_OUTPUT'], 'NE')
tr_ne_tavg = tr_ne.tavg(time=time, avgtime=avgtime)
figure()
tr_ne.plot(slice_axis=1, slice_at=time)
ax = gca()
uband(tr_ne['DIM_OF'][0][0,:], tr_ne_tavg, ax=ax)

vder(dvol=None)

Derivative with respect to volume for TRANSP variables consistent with TRANSP finite differencing methods.

See Solomon’s unvolint.pro

Parameters
  • d (OMFITtranspData) – OMFITtranspData object from the mds+ TRANSP tree.

  • dvol (OMFITtranspData or None) – OMFITtranspData object ‘dvol’ from the mds TRANSP tree. If None, will be taken from the Data’s MDStree.

Returns

dy/dV OMFITtransData object on zone-centered grid.

Example: Assuming the root is an OMFIT TRANSP module with a loaded run.

>> mvisc = OMFITtranspData(root['OUTPUTS']['TRANSP_OUTPUT'], 'MVISC')
>> tvisc = vint(mvisc)
>> tvisc['DATA'][0,-1]  # total viscous torque at first time step in Nm
2.0986965
>> mvisc2 = vder(tvisc)
>> np.all(np.isclose(mvisc2['DATA'][0,:], mvisc['DATA'][0,:]))
True

vint(dvol=None)

Volume integrate a TRANSP OMFITmdsValue object. Currently only available for objects from the TRANSP tree.

Parameters
  • d (OMFITtranspData) – OMFITtranspData object from the mds+ TRANSP OUTPUTS.TWO_D tree.

  • dvol (OMFITtranspData or None) – OMFITtranspData object ‘dvol’ from the mds TRANSP tree. If None, will be taken from Data’s MDStree.

Example: Assuming the root is an OMFIT TRANSP module with a loaded run.

>> mvisc = OMFITtranspData(root['OUTPUTS']['TRANSP_OUTPUT'], 'MVISC')
>> tvisc = vint(mvisc)
>> tvisc['DATA'][0,-1]  # total viscous torque at first time step in Nm
2.0986965
>> mvisc2 = vder(tvisc)
>> np.all(np.isclose(mvisc2['DATA'][0,:], mvisc['DATA'][0,:]))
True

xder()

TRANSP style differentiation in rho.

Parameters

d – OMFITtranspData object from the TRANSP tree.

Returns

dy/drho OMFITtranspData object on the other rho grid.

xdiff()

TRANSP convention of simple finite difference along rho (axis=0), including switching of centered/boundary grids.

Parameters

d – OMFITtranspData object from the mds+ TRANSP tree.

Returns

OMFITtranspData differenced on the other rho grid ('X' vs 'XB')

Example:

>> x = Data(['x', 'TRANSP'], 1470670204)
>> dx = xdiff(x)
>> print(dx['XAXIS'])
XB
>> print(dx.y[0,:3])
[ 0.02 0.02 0.02]
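The grid-switching convention can be illustrated with a minimal numpy sketch (illustrative only, not the actual TRANSP/OMFIT code):

```python
import numpy as np

# 'X' is the zone-centered rho grid; differencing data on it lands the
# result on the zone-boundary grid 'XB', and vice versa.
x = np.arange(0.01, 0.20, 0.02)   # zone-centered rho grid 'X'
dx = np.diff(x)                   # the differences live on the 'XB' grid
# dx is uniformly 0.02, matching the example output above
```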

xint()

TRANSP style integration in rho. UNVALIDATED.

Parameters

d – OMFITtranspData object from the TRANSP tree.

Returns

dy/drho OMFITtranspData object on the other rho grid.

class omfit_classes.omfit_transp.OMFITtranspMultigraph(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict

Class for unique manipulation/plotting of TRANSP multigraph mdsValues.

Initialize data object from an OMFITmdsValue.

Parameters
  • MDStree (omfit_mds) – OMFITmdsTree object

  • data (str) – Name of multigraph

plot(axes=None, label='LABEL', squeeze=None, **kw)[source]

Plot each data object in the multigraph.

Parameters
  • axes – Axes in which to plot.

  • label – String labeling the data. ‘LABEL’ or ‘RPLABEL’ are taken from TRANSP metadata.

  • squeeze – bool; if True, all plots are made in a single figure. Default is True for 1D data.

All other key word arguments are passed to the individual OMFITtranspData plot functions.

Plot TRANSP data, using default metadata.

If the data is one dimensional, it is plotted using the matplotlib plot function. If 2D, the default is to show the data using View2d. If a slice_axis is defined, the slices are shown as line plots. Extra keywords are passed to the plot or View2d function used.

Parameters
  • axes (Axes) – Axes in which to make the plots.

  • label (str) – Labels the data in the plot. If ‘LABEL’ or ‘RPLABEL’ these values are taken from the Data.

  • slice_axis (int) – Slices 2D data along the radial (0) or time (1) axis.

  • slice_at (np.ndarray) – Slices made in slice_axis. An empty list plots all available slices.

Returns

Figure

aol(rmnmp=None)

Normalized inverse scale length a/Lx, defined via the derivative with respect to the midplane minor radius "r": aol = -(a/X) dX/dr

Parameters
  • d – OMFITtranspData object from the TRANSP tree.

  • rmnmp – OMFITtranspData “RMNMP”

Returns

set_grid(zone='X', dvol=None)

Interpolate 2D TRANSP data to rho grid zone-boundary or zone-centered values.

Parameters
  • d – OMFITtranspData object from the mds+ TRANSP tree.

  • zone (str) – 'XB' for zone-boundary rho, 'X' for zone-centered. 'V' or 'VB' for volume.

  • dvol – OMFITtranspData object ‘dvol’ from the mds+ TRANSP tree. If None, will be taken from Data’s MDStree.

Returns

OMFITtranspData object on the specified rho grid

Example: Assuming the root is an OMFIT TRANSP module with a loaded run.

>> mvisc = OMFITtranspData(root['OUTPUTS']['TRANSP_OUTPUT'], 'MVISC')
>> mvisc['XAXIS']
'X'
>> print(mvisc['DIM_OF'][0][:3])
[ 0.01 0.03 0.05]
>> mviscb = set_grid(mvisc, 'XB')
>> mviscb['XAXIS']
'XB'
>> print(mviscb['DIM_OF'][0][:3])
[ 0.02 0.04 0.06]

sint(darea=None)

Surface integrate a TRANSP OMFITmdsValue object. Currently only available for objects from the TRANSP tree.

Parameters
  • d (OMFITtranspData) – OMFITtranspData object from the mds+ TRANSP OUTPUTS.TWO_D tree.

  • darea (OMFITtranspData or None) – OMFITtranspData object ‘darea’ from the mds TRANSP tree. If None, will be taken from Data’s MDStree.

Example:

>> mds = OMFITmds('DIII-D', 'transp', 1633030101)
>> cur = OMFITtranspData(mds, 'CUR')
>> da = OMFITtranspData(mds, 'DAREA')
>> curi = cur.sint(darea=da)
>> print(curi['DATA'][0,-1])
1.16626e+06
>> pcur = OMFITtranspData(mds, 'PCUR')
>> print(pcur['DATA'][0])
1.16626e+06

vder(dvol=None)

Derivative with respect to volume for TRANSP variables consistent with TRANSP finite differencing methods.

See Solomon’s unvolint.pro

Parameters
  • d (OMFITtranspData) – OMFITtranspData object from the mds+ TRANSP tree.

  • dvol (OMFITtranspData or None) – OMFITtranspData object ‘dvol’ from the mds TRANSP tree. If None, will be taken from the Data’s MDStree.

Returns

dy/dV OMFITtransData object on zone-centered grid.

Example: Assuming the root is an OMFIT TRANSP module with a loaded run.

>> mvisc = OMFITtranspData(root['OUTPUTS']['TRANSP_OUTPUT'], 'MVISC')
>> tvisc = vint(mvisc)
>> tvisc['DATA'][0,-1]  # total viscous torque at first time step in Nm
2.0986965
>> mvisc2 = vder(tvisc)
>> np.all(np.isclose(mvisc2['DATA'][0,:], mvisc['DATA'][0,:]))
True

vint(dvol=None)

Volume integrate a TRANSP OMFITmdsValue object. Currently only available for objects from the TRANSP tree.

Parameters
  • d (OMFITtranspData) – OMFITtranspData object from the mds+ TRANSP OUTPUTS.TWO_D tree.

  • dvol (OMFITtranspData or None) – OMFITtranspData object ‘dvol’ from the mds TRANSP tree. If None, will be taken from Data’s MDStree.

Example: Assuming the root is an OMFIT TRANSP module with a loaded run.

>> mvisc = OMFITtranspData(root['OUTPUTS']['TRANSP_OUTPUT'], 'MVISC')
>> tvisc = vint(mvisc)
>> tvisc['DATA'][0,-1]  # total viscous torque at first time step in Nm
2.0986965
>> mvisc2 = vder(tvisc)
>> np.all(np.isclose(mvisc2['DATA'][0,:], mvisc['DATA'][0,:]))
True

xder()

TRANSP style differentiation in rho.

Parameters

d – OMFITtranspData object from the TRANSP tree.

Returns

dy/drho OMFITtranspData object on the other rho grid.

xdiff()

TRANSP convention of simple finite difference along rho (axis=0), including switching of centered/boundary grids.

Parameters

d – OMFITtranspData object from the mds+ TRANSP tree.

Returns

OMFITtranspData differenced on the other rho grid ('X' vs 'XB')

Example:

>> x = Data(['x', 'TRANSP'], 1470670204)
>> dx = xdiff(x)
>> print(dx['XAXIS'])
XB
>> print(dx.y[0,:3])
[ 0.02 0.02 0.02]

xint()

TRANSP style integration in rho. UNVALIDATED.

Parameters

d – OMFITtranspData object from the TRANSP tree.

Returns

dy/drho OMFITtranspData object on the other rho grid.

class omfit_classes.omfit_transp.OMFITplasmastate(*args, **kwargs)[source]

Bases: omfit_classes.omfit_nc.OMFITnc

Class for handling TRANSP netcdf statefile (not to be confused with the time-dependent TRANSP output CDF file)

sources = {'pbe': 'Beam power to electrons', 'pbi': 'Beam power to ions', 'pbth': 'Thermalization of beam power to ions', 'pe_trans': 'Total power to electrons', 'peech': 'ECH power to electrons', 'pfuse': 'Fusion alpha power transferred to electrons', 'pfusi': 'Fusion alpha power transferred to thermal ions', 'pfusth': 'Thermalization of fusion alpha power transferred to thermal ions', 'pi_trans': 'Total power to ions', 'picth': 'Direct ion heating power by ICRF', 'pmine': 'Electron heating power by minority ions', 'pmini': 'Ion heating power by minority ions', 'pminth': 'Thermalization of ion heating power by minority ions', 'pohme': 'Ohmic heating power to electrons', 'prad_br': 'Radiated power: bremsstrahlung', 'prad_cy': 'Radiated power: synchrotron', 'prad_li': 'Radiated power: line', 'qie': 'Collisional exchange from ions to electrons', 'sn_trans': 'Particle source', 'tq_trans': 'Angular momentum source torque'}
calcQ()[source]
Returns

fusion gain
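For orientation, fusion gain is conventionally Q = fusion power / external heating power. A minimal numeric sketch with hypothetical round-number powers (not values read from an actual statefile):

```python
# Hypothetical powers, chosen only to illustrate the Q convention
p_fusion = 500e6           # total fusion power [W]
p_heating = 50e6           # externally applied heating power [W]
Q = p_fusion / p_heating   # fusion gain
```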

summary_sources()[source]

Print summary of integrated sources

to_omas(ods=None, time_index=0, update=['core_profiles', 'equilibrium', 'core_sources'])[source]

Translate a TRANSP plasmastate file (output of TRXPL, or the plasmastate of SWIM) to an OMAS data structure

Parameters
  • ods – input ods to which data is added

  • time_index – time index to which data is added

  • update – list of IDSs to update from the statefile

Returns

ODS

class omfit_classes.omfit_transp.OMFITtranspOutput(transp_out)[source]

Bases: omfit_classes.sortedDict.OMFITdataset

Class for dynamic serving of TRANSP output data from MDS or CDF

Parameters

transp_out – OMFITnc file, OMFITmds TRANSP tree, or string path to NetCDF file

load()[source]
to_omas()[source]
omfit_classes.omfit_transp.check_TRANSP_run(runid, project=None, tunnel=None)[source]

Function that checks the status of a TRANSP run as reported by the TRANSP MONITOR GRID website: https://w3.pppl.gov/transp/transpgrid_monitor

Parameters
  • runid – runid to be checked

  • project – project (i.e. tokamak) of the runid (optional)

  • tunnel – use SOCKS via specified tunnel

Returns

  • None if no matching runid/project is found

  • tuple with 1) True/None/False depending on whether the run succeeded, is waiting, or failed,

    and 2) a dictionary with the parsed HTML information

omfit_classes.omfit_transp.wait_TRANSP_run(runid, project=None, t_check=5, verbose=True, tunnel=None)[source]

Function that waits for a TRANSP run to end as reported by the TRANSP MONITOR GRID website: https://w3.pppl.gov/transp/transpgrid_monitor

Parameters
  • runid – runid to be checked

  • project – project (i.e. tokamak) of the runid (optional)

  • t_check – how often to check (seconds)

  • verbose – print to screen

  • tunnel – use SOCKS via specified tunnel

Returns

  • None if no matching runid/project is found

  • tuple with 1) True/False depending on whether the run succeeded or failed,

    and 2) a dictionary with the parsed HTML information
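The waiting behavior can be sketched as a generic polling loop. The `wait_for` helper below is hypothetical, not the actual implementation; it assumes the convention reported by check_TRANSP_run (True = ok, False = failed, None = still waiting):

```python
import time

def wait_for(check, t_check=5):
    """Poll check() every t_check seconds until it reports a final status
    (True or False); keep waiting while it returns None."""
    while True:
        status = check()
        if status is not None:
            return status
        time.sleep(t_check)
```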

omfit_classes.omfit_transp.next_available_TRANSP_runid(runid, project, increment=1, skiplist=[], server=None)[source]

Function which checks what MDS+ tree entries are available

Parameters
  • runid – runid to start checking from (included)

  • project – project [e.g. D3D, ITER, NSTX, …] used to set the MDS+ TRANSP tree and server to be queried

  • increment – positive / negative

  • skiplist – list of runids to skip

  • server – MDS+ TRANSP server to be queried [e.g. ‘atlas.gat.com’, or ‘transpgrid.pppl.gov’]

Returns

tuple with the next available runid (as an integer with format shotXXYY) and the augmented skiplist
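The search logic can be sketched as follows (a hypothetical helper: the `taken` argument stands in for the MDS+ tree lookup, which is an assumption here, not part of the real signature):

```python
def next_runid(runid, increment=1, skiplist=(), taken=()):
    """Step by `increment` until a runid is neither in the skiplist
    nor already present on the server."""
    while runid in skiplist or runid in taken:
        runid += increment
    return runid
```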

class omfit_classes.omfit_transp.OMFITfbm(*args, **kwargs)[source]

Bases: omfit_classes.omfit_nc.OMFITnc

Class for handling NUBEAM FBM distribution function files

plot(rf=None, zf=None, cartesian=True, fig=None)[source]

Plot distribution function

Parameters
  • rf – radial location where to show data in velocity space

  • zf – vertical location where to show data in velocity space

Returns

figure handle

plot_energy_space(rmin, rmax, zmin, zmax, emin=0.0, emax=1000000.0, cartesian=True, ax=None)[source]

Average distribution function over a specified R,Z and plot energy versus pitch

Parameters
  • rmin – minimum R to average over

  • rmax – maximum R to average over

  • zmin – minimum Z to average over

  • zmax – maximum Z to average over

  • emin – minimum energy to average over

  • emax – maximum energy to average over

  • cartesian – plot in energy/pitch space or E_parallel and E_perp

  • ax – axes

to_omas(ods=None, time_index=0)[source]

Save NUBEAM distribution function to OMAS

Parameters
  • ods – input ods to which data is added

  • time_index – time index to which data is added

Returns

updated ODS

omfit_trip3d

class omfit_classes.omfit_trip3d.OMFITtrip3Dcoil(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.omfit_ascii.OMFITascii

load()[source]
save()[source]

The save method is meant to be overridden by classes that use OMFITobject as a superclass. If left as is, this method detects whether .filename was changed and, if so, makes a copy from the original .filename (saved in the .link attribute) to the new .filename

class omfit_classes.omfit_trip3d.OMFITtrip3Dlines(*args, **kwargs)[source]

Bases: omfit_classes.omfit_matrix.OMFITmatrix

The OMFITtrip3Dlines class parses and handles the TRIP3D ‘lines.out’ output file. A self-described xarray object is stored under self[‘data’].

Parameters
  • filename – path to file.

  • bin – def None, filetype is unknown, if True, NetCDF, if False, ASCII.

  • zip – def None, compression is unknown, if False, switched off, if True, on.

  • **kw – keywords for OMFITpath.__init__()

columns = {2: ['psimin', 'psimax'], 9: ['phi', 'rho', 'ith', 'the', 'brho', 'bthe', 'bphi', 'mlen', 'psin'], 11: ['phi', 'rho', 'ith', 'the', 'brho', 'bthe', 'bphi', 'mlen', 'psin', 'rr', 'zz'], 13: ['phi', 'rho', 'ith', 'the', 'brho', 'bthe', 'bphi', 'mlen', 'psin', 'rr', 'zz', 'psimin', 'psimax']}
load(bin=None, zip=None, geqdsk=None, verbose=False, quiet=False)[source]

https://xarray.pydata.org/en/stable/generated/xarray.open_dataarray.html https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html

Parameters
  • bin – def None, load through xarray first, then through pandas, if True, xarray only, if False, pandas only.

  • zip – def False, compression is switched off, if True, on.

  • xrkw – keywords for xarray.open_dataarray()

  • **pdkw – keywords for pandas.read_csv()

Return bin, zip

resulting values for binary, zipped.

to3d()[source]
addattrs(geqdsk=None)[source]
addcols()[source]
toxr(data, attrs=None)[source]
save(zip=True, quiet=False)[source]

https://xarray.pydata.org/en/stable/generated/xarray.DataArray.to_netcdf.html https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_csv.html

Parameters
  • bin – def None, save through xarray first, then pandas, if True, xarray only, if False, pandas only.

  • zip – def False, compression is switched off, if True, on.

  • xrkw – keywords for xarray.to_netcdf()

  • **pdkw – keywords for pandas.to_csv()

Return bin, zip

resulting values for binary, zipped.

prep(type, prefix=True, quiet=False)[source]
doplot(type, geqdsk=None, lines=(None, None, None), points=(None, None, None), limx=(None, None), limy=(None, None), col='min', cbar=True, prefix=True, quiet=False, **kw)[source]
plot(type='summary', **kw)[source]
c = 9
class omfit_classes.omfit_trip3d.OMFITtrip3Dhits(*args, **kwargs)[source]

Bases: omfit_classes.omfit_matrix.OMFITmatrix

The OMFITtrip3Dhits class parses and handles the TRIP3D ‘hit.out’ output file. A self-described xarray object is stored under self[‘data’].

Parameters
  • filename – path to file.

  • bin – def None, filetype is unknown, if True, NetCDF, if False, ASCII.

  • zip – def None, compression is unknown, if False, switched off, if True, on.

  • **kw – keywords for OMFITpath.__init__()

load(bin=None, zip=None, inverse=True, geqdsk=None, verbose=False, quiet=False)[source]

https://xarray.pydata.org/en/stable/generated/xarray.open_dataarray.html https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html

Parameters
  • bin – def None, load through xarray first, then through pandas, if True, xarray only, if False, pandas only.

  • zip – def False, compression is switched off, if True, on.

  • xrkw – keywords for xarray.open_dataarray()

  • **pdkw – keywords for pandas.read_csv()

Return bin, zip

resulting values for binary, zipped.

to3d()[source]
addattrs(inverse=True, geqdsk=None)[source]
addcols()[source]
toxr(data, attrs=None)[source]
save(zip=False, quiet=False)[source]

https://xarray.pydata.org/en/stable/generated/xarray.DataArray.to_netcdf.html https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_csv.html

Parameters
  • bin – def None, save through xarray first, then pandas, if True, xarray only, if False, pandas only.

  • zip – def False, compression is switched off, if True, on.

  • xrkw – keywords for xarray.to_netcdf()

  • **pdkw – keywords for pandas.to_csv()

Return bin, zip

resulting values for binary, zipped.

prep(project=False, force=False, save=None, prefix=True, quiet=False)[source]
plot(mlen=100, sol=False, cbar=True, inverse=False, prefix=True, quiet=False, **kw)[source]

The OMFITtrip3Dhits.plot method plots the footprint based on the data prepared by OMFITtrip3Dhits.prep.

Parameters
  • mlen – if None, select all the data points, if MIN, select points with MIN <= mlen, if (MIN,MAX), MIN <= mlen <= MAX.

  • cbar – if True, the colorbar will be plotted, if False, hidden.

  • inverse – if False, y-axis is normal, if True, inverted.

  • quiet – if False, output to console is normal, if True, suppressed.

  • **kw – keyword dictionary to be passed to scatter().

Returns

None
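The `mlen` selector convention can be sketched as a small normalization helper (this illustrates the documented None / MIN / (MIN, MAX) convention, not the OMFIT source):

```python
def mlen_bounds(mlen):
    """Normalize the mlen selector into (lo, hi):
    None -> all points, MIN -> (MIN, inf), (MIN, MAX) -> as given."""
    if mlen is None:
        return (float('-inf'), float('inf'))
    if isinstance(mlen, (tuple, list)):
        lo, hi = mlen
        return (lo, hi)
    return (mlen, float('inf'))
```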

class omfit_classes.omfit_trip3d.OMFITtrip3Dstart(*args, **kwargs)[source]

Bases: omfit_classes.omfit_matrix.OMFITmatrix

The OMFITtrip3Dstart class parses and handles the TRIP3D start points input file. A self-described xarray object is stored under self[‘data’].

Parameters
  • filename – path to file.

  • bin – def None, filetype is unknown, if True, NetCDF, if False, ASCII.

  • zip – def None, compression is unknown, if False, switched off, if True, on.

  • **kw – keywords for OMFITpath.__init__()

load()[source]

https://xarray.pydata.org/en/stable/generated/xarray.open_dataarray.html https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html

Parameters
  • bin – def None, load through xarray first, then through pandas, if True, xarray only, if False, pandas only.

  • zip – def False, compression is switched off, if True, on.

  • xrkw – keywords for xarray.open_dataarray()

  • **pdkw – keywords for pandas.read_csv()

Return bin, zip

resulting values for binary, zipped.

save()[source]

https://xarray.pydata.org/en/stable/generated/xarray.DataArray.to_netcdf.html https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_csv.html

Parameters
  • bin – def None, save through xarray first, then pandas, if True, xarray only, if False, pandas only.

  • zip – def False, compression is switched off, if True, on.

  • xrkw – keywords for xarray.to_netcdf()

  • **pdkw – keywords for pandas.to_csv()

Return bin, zip

resulting values for binary, zipped.

toxr(data=None)[source]
plot(radvar=None, geqdsk=None, pol=True, tor=True, lim=True, surf=True, **kw)[source]
class omfit_classes.omfit_trip3d.OMFITtrip3Dprobeg(*args, **kwargs)[source]

Bases: omfit_classes.omfit_matrix.OMFITmatrix

The OMFITtrip3Dprobeg class parses and handles the TRIP3D ‘probe_gb.out’ output file. A self-described xarray object is stored under self[‘data’].

Parameters
  • filename – path to file.

  • bin – def None, filetype is unknown, if True, NetCDF, if False, ASCII.

  • zip – def None, compression is unknown, if False, switched off, if True, on.

  • **kw – keywords for OMFITpath.__init__()

load()[source]

https://xarray.pydata.org/en/stable/generated/xarray.open_dataarray.html https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html

Parameters
  • bin – def None, load through xarray first, then through pandas, if True, xarray only, if False, pandas only.

  • zip – def False, compression is switched off, if True, on.

  • xrkw – keywords for xarray.open_dataarray()

  • **pdkw – keywords for pandas.read_csv()

Return bin, zip

resulting values for binary, zipped.

save()[source]

https://xarray.pydata.org/en/stable/generated/xarray.DataArray.to_netcdf.html https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_csv.html

Parameters
  • bin – def None, save through xarray first, then pandas, if True, xarray only, if False, pandas only.

  • zip – def False, compression is switched off, if True, on.

  • xrkw – keywords for xarray.to_netcdf()

  • **pdkw – keywords for pandas.to_csv()

Return bin, zip

resulting values for binary, zipped.

toxr(data=None)[source]
plot(cols=['Bpol', 'Bmag'], phi=None, geqdsk=None, stats=True, **kw)[source]

omfit_tsc

class omfit_classes.omfit_tsc.OMFITtsc(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.omfit_ascii.OMFITascii

OMFIT class used to interface with TSC input files

Parameters
  • filename – filename passed to OMFITascii class

  • **kw – keyword dictionary passed to OMFITascii class

naming = '\n 00 &  Control &  IRST1 & IRST2 & IPEST & NCYCLE &  NSKIPR & NSKIPL &  IMOVIE\n 01 &  Dimensions &  NX & NZ & ALX & ALZ & ISYM & CCON & IDATA\n 02 &  Time step &  DTMINS & DTMAXS & DTFAC & LRSWTCH & IDENS & IPRES & IFUNC\n 03 &  Numerical &  XLIM & ZLIM & XLIM2 & FFAC & NDIV & ICIRC & ISVD\n 04 &  Surf. Ave. &  ISURF & NPSI & NSKIPSF & TFMULT &  ALPHAR & BETAR & ITRMOD\n 05 &  Limiter &  I & XLIMA(I) & ZLIMA(I) & XLIMA(I+1) & ZLIMA(I+1) & XLIMA(I+2) & ZLIM(I+2)\n 06 &  Divertor &  IDIV & PSIRAT & X1SEP & X2SEP & Z1SEP & Z2SEP & NSEPMAX\n 07 &  Impurities &  IIMP & ILTE & IMPBND & IMPPEL & AMGAS & ZGAS & NTHE\n 08 &  Obs. pairs &  J & XOBS(2J-1) & ZOBS(2J-1) & XOBS(2J) & ZOBS(2J) &  NPLOTOBS\n 09 &  Ext. coils &  N & XCOIL(N) & ZCOIL(N) & IGROUPC(N) & ATURNSC(N) & RSCOILS(N) & AINDC(N)\n 10 &  Int. coils &  M & XWIRE(M) & ZWIRE(M) & IGROUPW(M) & ATURNSW(M) & RSWIRES(M) & CWICS(M)\n 11 &  ACOEF  &  ICO & NCO & ACOEF(ICO) & $\\ldots$(ICO+1) & $\\ldots$ & $\\ldots$ &  $\\ldots$(ICO+4)\n 12 &  Tranport &  TEVV & DCGS & QSAW & ZEFF & IALPHA & IBALSW & ISAW\n 13 &  Init. cond-1 &  ALPHAG & ALPHAP & NEQMAX & XPLAS & ZPLAS & GZERO & QZERO\n 14 &  Init. cond-2 &  ISTART & XZERIC & AXIC & ZZERIC & BZIC\n 15 &  Coil groups &  IGROUP & GCUR(1)  & $\\ldots$ & $\\ldots$ & $\\ldots$ & $\\ldots$ & GCUR(6)\n 16 &  Plasma curr. &   -  & PCUR(1)  & $\\ldots$ & $\\ldots$ & $\\ldots$  & $\\ldots$ & PCUR(6)\n 17 &  Plasma press. &   -  & PPRES(1)  & $\\ldots$ & $\\ldots$ & $\\ldots$  & $\\ldots$ & PPRES(6)\n 18 &  Timing &   -  & TPRO(1)  & $\\ldots$ & $\\ldots$ & $\\ldots$  & $\\ldots$ &  TPRO(6)\n 19 &  Feedback-1 &   L  & NRFB(L)  & NFEEDO(L) & FBFAC(L) & FBCON(L)  & IDELAY(L) & FBFACI(L)\n 20 &  Feedback-2 &   L  & TFBONS(L)  & TFBOFS(L) & FBFAC1(L) & FBFACD(L) & IPEXT(L)\n 21 &  Contour plot &   ICPLET  & ICPLGF & ICPLWF & ICPLPR & ICPLBV & ICPLUV & ICPLXP\n 22 &  Vector plot &   IVPLBP  & IVPLVI  & IVPLFR & IVPLJP & IVPLVC & IVPLVT & -\n 23 &  Aux. 
heat &   -  & BEAMP(1)  & $\\ldots$ & $\\ldots$ & $\\ldots$ & $\\ldots$ & BEAMP(6)\n 24 &  Density &   -  & RNORM(1)  & $\\ldots$ & $\\ldots$ & $\\ldots$ & $\\ldots$ &  RNORM(6)\n 25 &  Dep. prof. &   ABEAM  & DBEAM  & NEBEAM & EBEAMKEV & AMBEAM  & FRACPAR & IBOOTST\n 26 &  Anom. trans. &   - & FBCHIA(1)  & $\\ldots$ & $\\ldots$ & $\\ldots$ & $\\ldots$ &  FBCHIA(6)\n 27 &  Tor. field &   - & GZEROV(1)  & $\\ldots$ & $\\ldots$ & $\\ldots$ & $\\ldots$ & GZEROV(6)\n 28 &  Loop volt. &   - & VLOOPV(1)  & $\\ldots$ & $\\ldots$ & $\\ldots$ & $\\ldots$ & VLOOPV(6)\n 29 &  PEST output&   - & TPEST(1)  & $\\ldots$ & $\\ldots$ & $\\ldots$ & $\\ldots$ & TPEST(6)\n 30 &  Mag. Axis(x) &   - & XMAGO(1) & $\\ldots$ & $\\ldots$ & $\\ldots$ & $\\ldots$ & XMAGO(6)\n 31 &  Mag. Axis(z) &   - & ZMAGO(1) & $\\ldots$ & $\\ldots$ & $\\ldots$ & $\\ldots$ & ZMAGO(6)\n 32 &  Divertor &  N & XLPLATE(N) & ZLPLATE(N) & XRPLATE(N) & ZRPLATE(N) & FPLATE(N,1) & FPLATE(N,2)\n 33 &  Coil grp-2 & IGROUP & RESGS( )\n 34 &  TEVV(t) &   -  & TEVVO(1)  & $\\ldots$ & $\\ldots$ & $\\ldots$ & $\\ldots$ & TEVVO(6)\n 35 &  FFAC(t) &   -  & FFACO(1)  & $\\ldots$ & $\\ldots$ & $\\ldots$ & $\\ldots$ & FFACO(6)\n 36 &  ZEFF(t) &   -  & ZEFFV(1)  & $\\ldots$ & $\\ldots$ & $\\ldots$ & $\\ldots$ & ZEFFV(6)\n 37 &  Volt group &   IGROUP & GVOLT(1) & $\\ldots$ & $\\ldots$ & $\\ldots$ & $\\ldots$ & GVOLT(6)\n 38 &  LHDEP &   ILHCD & VILIM  & FREQLH & AION & ZION & CPROF & IFK\n 39 &  Ext. coil-2 &   N & DXCOIL(N)  & DZCOIL(N) & FCU(N) & FSS(N) & TEMPC(N) & CCICS(N)\n 40 &  Noplot &  NOPLOT(1) & $\\ldots$ & $\\ldots$ & $\\ldots$ & $\\ldots$ & $\\ldots$ & NOPLOT(7)\n 41 &  Ripple &   IRIPPL & NTFCOIL & RIPMAX & RTFCOIL & NPITCH & RIPMULT & IRIPMOD\n 42 &  Major rad. &  - &  RZERV(1) & $\\ldots$ & $\\ldots$ & $\\ldots$ & $\\ldots$ & RZERV(6)\n 43 &  Minor rad. 
&  - &  AZERV(1) & $\\ldots$ & $\\ldots$ & $\\ldots$ & $\\ldots$ & AZERV(6)\n 44 &  Ellipticity &  - & EZERV(1) & $\\ldots$ & $\\ldots$ & $\\ldots$ & $\\ldots$ & EZERV(6)\n 45 &  Triangularity &  - & DZERV(1)& $\\ldots$ & $\\ldots$ & $\\ldots$ & $\\ldots$ & DZERV(6)\n 46 &  LH heating &  - &  PLHAMP(1) & $\\ldots$ & $\\ldots$ & $\\ldots$ & $\\ldots$ & PLHAMP(6)\n 47 &  Dens. exp-1 &  - &  ALPHARV(1) & $\\ldots$ & $\\ldots$ & $\\ldots$ & $\\ldots$ & ALPHARV(6)\n 48 &  Dens. exp-2 & - &  BETARV(1) & $\\ldots$ & $\\ldots$ & $\\ldots$ & $\\ldots$ & BETARV(6)\n 49 &  Multipole &  N & MULTN(N) & ROMULT(N) & IGROUPM(N) &  ATURNSM(N)\n 50 &  CD &  - &  FRACPAR(1) & $\\ldots$ & $\\ldots$ & $\\ldots$ & $\\ldots$ & FRACPAR(6)\n 51 &  alh & -& A(1) & $\\ldots$ & $\\ldots$ & $\\ldots$ & $\\ldots$ &  A(6)\n 52 &  dlh & -& D(1) & $\\ldots$ & $\\ldots$ & $\\ldots$ & $\\ldots$ & D(6)\n 53 &  a1lh & -& A1(1) & $\\ldots$ & $\\ldots$ & $\\ldots$ & $\\ldots$ & A1(6)\n 54 &  a2lh & -& A2(1) & $\\ldots$ & $\\ldots$ & $\\ldots$ & $\\ldots$ & A2(6)\n 55 &  ac & -& AC(1) & $\\ldots$ & $\\ldots$ & $\\ldots$ & $\\ldots$ & AC(6)\n 56 &  dc & -& DC(1) & $\\ldots$ & $\\ldots$ & $\\ldots$ & $\\ldots$ & DC(6)\n 57 &  ac1 & -& AC1(1) & $\\ldots$ & $\\ldots$ & $\\ldots$ & $\\ldots$ & AC1(6)\n 58 &  ac2 & -& AC2(1) & $\\ldots$ & $\\ldots$ & $\\ldots$ & $\\ldots$ & AC2(6)\n 59 &  ICRH & -& PICRH(1) & $\\ldots$ & $\\ldots$ & $\\ldots$ & $\\ldots$ &  PICRH(6)\n 60 &  Halo Temp & - &  TH(1) & $\\ldots$ & $\\ldots$ & $\\ldots$ & $\\ldots$ & TH(6)\n 61 &  Halo Width & - &  AH(1) & $\\ldots$ & $\\ldots$ & $\\ldots$ & $\\ldots$ &  AH(6)\n 62 &  X-Shape point & - &  XCON0(1) & $\\ldots$ & $\\ldots$ & $\\ldots$ & $\\ldots$ &  XCON0(6)\n 63 &  Z-Shape point & - &  ZCON0(1) & $\\ldots$ & $\\ldots$ & $\\ldots$ & $\\ldots$ &  ZCON0(6)\n 64 &  Fast Wave J & - &  FWCD(1) & $\\ldots$ & $\\ldots$ & $\\ldots$ & $\\ldots$ &  FWCD(6)\n 65 &  ICRH power profile &   &  A(1) & $\\ldots$ & $\\ldots$ & $\\ldots$ & $\\ldots$ & 
 A(6)\n 66 &  ICRH power profile &   &  D(1) & $\\ldots$ & $\\ldots$ & $\\ldots$ & $\\ldots$ &  D(6)\n 67 &  ICRH power profile &   &  A1(1) & $\\ldots$ & $\\ldots$ & $\\ldots$ & $\\ldots$ &  A1(6)\n 68 &  ICRH power profile &   &  A2(1) & $\\ldots$ & $\\ldots$ & $\\ldots$ & $\\ldots$ &  A2(6)\n 69 &  ICRH current profile &   &  A(1) & $\\ldots$ & $\\ldots$ & $\\ldots$ & $\\ldots$ &  A(6)\n 70 &  ICRH current profile &   &  D(1) & $\\ldots$ & $\\ldots$ & $\\ldots$ & $\\ldots$ &  D(6)\n 71 &  ICRH current profile &   &  A1(1) & $\\ldots$ & $\\ldots$ & $\\ldots$ & $\\ldots$ &  A1(6)\n 72 &  ICRH current profile &   &  A2(1) & $\\ldots$ & $\\ldots$ & $\\ldots$ & $\\ldots$ &  A2(6)\n 73 &  He conf. time & - &  HEACT(1) & $\\ldots$ & $\\ldots$ & $\\ldots$ & $\\ldots$ &  HEACT(6)\n 74 &  UFILE output & - &  TUFILE(1) & $\\ldots$ & $\\ldots$ & $\\ldots$ & $\\ldots$ &  TUFILE(6)\n 75 &  Sawtooth time & - &  SAWTIME(1) & $\\ldots$ & $\\ldots$ & $\\ldots$ & $\\ldots$ &  SAWTIME(6)\n 76 &  Anom. ion trans. 
&   - & FBCHIIA(1)  & $\\ldots$ & $\\ldots$ & $\\ldots$ & $\\ldots$ &  FBCHIIA(6)\n 77 &  acoef(123) & - &  qadd(1) & $\\ldots$ & $\\ldots$ & $\\ldots$ & $\\ldots$ &  qadd(6)\n 78 &  acoef(3003) & - &  fhmodei(1) & $\\ldots$ & $\\ldots$ & $\\ldots$ & $\\ldots$ &  fhmodei(6)\n 79 &  acoef(3011) & - &  pwidthc(1) & $\\ldots$ & $\\ldots$ & $\\ldots$ & $\\ldots$ &  pwidthc(6)\n 80 &  acoef(3006) & - &  chiped(1) & $\\ldots$ & $\\ldots$ & $\\ldots$ & $\\ldots$ &  chiped(6)\n 81 &  acoef(3102) & - &  tped(1) & $\\ldots$ & $\\ldots$ & $\\ldots$ & $\\ldots$ &  tped(6)\n 82 &  impurity fraction &  imptype &  frac(1) & $\\ldots$ & $\\ldots$ & $\\ldots$ & $\\ldots$ &  frac(6)\n 83 &  acoef(3012) & - &  nflag(1) & $\\ldots$ & $\\ldots$ & $\\ldots$ & $\\ldots$ &  nflag(6)\n 84 &  acoef(3013) & - &  expn1(1) & $\\ldots$ & $\\ldots$ & $\\ldots$ & $\\ldots$ &  expn1(6)\n 85 &  acoef(3014) & - &  expn2(1) & $\\ldots$ & $\\ldots$ & $\\ldots$ & $\\ldots$ &  expn2(6)\n 86 &  acoef(3004) & - &  firitb(1) & $\\ldots$ & $\\ldots$ & $\\ldots$ & $\\ldots$ &  firitb(6)\n 87 &  acoef(3005) & - &  secitb(1) & $\\ldots$ & $\\ldots$ & $\\ldots$ & $\\ldots$ &  secitb(6)\n 88 &  acoef(881) & - &  fracno(1) & $\\ldots$ & $\\ldots$ & $\\ldots$ & $\\ldots$ &  fracno(6)\n 89 &  acoef(889) & - &  newden(1) & $\\ldots$ & $\\ldots$ & $\\ldots$ & $\\ldots$ &  newden(6)\n 90 &  ECRH Power (MW) &  & PECRH(1) & $\\ldots$ & $\\ldots$ & $\\ldots$ & $\\ldots$ &  PECRH(6)\n 91 &  ECCD Toroidal Current (MA) &  & ECCD(1) & $\\ldots$ & $\\ldots$ & $\\ldots$ & $\\ldots$ &  ECCD(6)\n 92 &  Sh. Par. "a" (ECCD H CD) &  & AECD(1) & $\\ldots$ & $\\ldots$ & $\\ldots$ & $\\ldots$ &  AECD(6)\n 93 &  Sh. Par. "d" (ECCD H CD) &  & DECD(1) & $\\ldots$ & $\\ldots$ & $\\ldots$ & $\\ldots$ &  DECD(6)\n 94 &  Sh. Par. "a1" (ECCD H CD) &  & A1ECD(1) & $\\ldots$ & $\\ldots$ & $\\ldots$ & $\\ldots$ &  A1ECD(6)\n 95 &  Sh. Par. "a2" (ECCD H CD) &  & A2ECD(1) & $\\ldots$ & $\\ldots$ & $\\ldots$ & $\\ldots$ &  A2ECD(6)\n 99 &\n'
load()[source]
save()[source]

The save method is meant to be overridden by classes that use OMFITobject as a superclass. If left as-is, this method detects whether .filename has changed and, if so, copies the file from the original .filename (stored in the .link attribute) to the new .filename

omfit_uda

class omfit_classes.omfit_uda.OMFITudaValue(server, shot=None, TDI=None, **kw)[source]

Bases: object

A wrapper for pyuda calls that behaves like OMFITmdsValue

Parameters
  • server – The device or server from which to get a signal

  • shot – Which shot

  • TDI – The signal to fetch

  • **kw – Additional keywords that OMFITmdsValue might take, but are not necessary for UDA

data()[source]
dim_of(index=0)[source]
units()[source]
units_dim_of(index=0)[source]
xarray()[source]
Returns

DataArray with information from this node

omfit_uedge

class omfit_classes.omfit_uedge.OMFITuedge(*args, **kwargs)[source]

Bases: omfit_classes.omfit_hdf5.OMFIThdf5

plot_mesh(ax=None, **kw)[source]

Plot mesh

Parameters
  • ax – plot axes

  • **kw – extra arguments passed to plot

plot_box(variable='te', ax=None, levels=None, edgecolor='none', **kw)[source]

Plot UEDGE results for a box mesh

Parameters
  • variable – variable to plot

  • ax – axes

  • levels – number of levels in contour plot

  • edgecolor – edgecolor of contour plot

  • **kw – extra arguments passed to contour plot

plot(variable='te', ax=None, edgecolor='none', **kw)[source]

Plot UEDGE results for a poloidal mesh

Parameters
  • variable – variable to plot

  • ax – axes

  • levels – number of levels in contour plot

  • edgecolor – edgecolor of contour plot

  • **kw – extra arguments passed to PatchCollection

nice_vars(variable)[source]

Return info for a given UEDGE output variable

Parameters

variable – variable name

Returns

dictionary with units, label, norm

class omfit_classes.omfit_uedge.uedge_variable[source]

Bases: str

Class used to distinguish between strings and UEDGE variables in the PyUEDGE input deck

omfit_classes.omfit_uedge.uedge_common_mapper(var=None)[source]

Parses the uedge_common_map.json and caches it for later use

Parameters

var – either variable name or None

Returns

mapper if var is None else mapping info for var

omfit_classes.omfit_uedge.uedge_common_map_generator(uedge_dir)[source]

Parse the bbb.v com.v flx.v grd.v files and generate mapper for common blocks

This is useful because the BASIS version of UEDGE did not require specifying the common blocks for variables. The transition to PyUEDGE requires adding the common-block information to the old input files.

To translate old UEDGE files to the new PyUEDGE format use: >> OMFITuedgeBasisInput(‘old_uedge_input.txt’).convert()

Parameters

uedge_dir – folder where UEDGE is installed

Returns

OMFITjson object containing the mapping info

class omfit_classes.omfit_uedge.OMFITuedgeBasisInput(*args, **kwargs)[source]

Bases: omfit_classes.omfit_ascii.OMFITascii, omfit_classes.sortedDict.SortedDict

Class used to parse input files from BASIS version of UEDGE

To translate old UEDGE files to the new PyUEDGE format use: >> OMFITuedgeBasisInput(‘old_uedge_input.txt’).convert()

load()[source]
convert()[source]

Convert input files from BASIS version of UEDGE to Python version

Returns

OMFITuedgeInput object

class omfit_classes.omfit_uedge.OMFITuedgeInput(*args, **kwargs)[source]

Bases: omfit_classes.omfit_ascii.OMFITascii, omfit_classes.sortedDict.SortedDict

Class used to parse input files of Python version of UEDGE

load()[source]
save()[source]

The save method is meant to be overridden by classes that use OMFITobject as a superclass. If left as-is, this method detects whether .filename has changed and, if so, copies the file from the original .filename (stored in the .link attribute) to the new .filename

omfit_ufile

class omfit_classes.omfit_ufile.OMFITuFile(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.omfit_ascii.OMFITascii

Class used to interface with TRANSP U-files

Parameters
  • filename – filename passed to OMFITobject class

  • **kw – keyword dictionary passed to OMFITobject class

load()[source]

Method used to load the content of the file specified in the .filename attribute

Returns

None

save()[source]

Save Ufile to the file specified in the .filename attribute

Returns

None

plot(axes=None, figure=None, cmap=None, **kw)[source]

Plot Ufile content

Parameters
  • axes – Axes object or None

  • figure – Figure object or None

  • cmap – Color map name used for multi-dimensional plots.

  • **kw – Extra key word arguments passed to matplotlib plot function.

Returns

Figure

smooth(window_x=None, window_len=11, window='hanning', axis=-1)[source]

This built-in function makes use of the OMFIT utils.smooth function to smooth over a single dimension of the data.

If the axis in question is irregular, the data is first linearly interpolated onto a regular grid with spacing equal to the minimum step size of the irregular grid.

Parameters
  • window_x – Smoothing window size in axis coordinate units.

  • window_len – Smoothing window size in index units. Ignored if window_x present. Enforced odd integer.

  • window – the type of window from ‘flat’, ‘hanning’, ‘hamming’, ‘bartlett’, ‘blackman’ flat window will produce a moving average smoothing.

  • axis – Dimension over which to smooth. Accepts integer (0), key (‘X0’), or name (‘TIME’).

Returns

OMFITuFile with the requested dimension smoothed
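The effect of smooth() can be sketched with a plain-NumPy stand-in (smooth_1d below is hypothetical and simplified; the OMFIT version delegates to utils.smooth and also handles window_x coordinate units and irregular grids):

```python
import numpy as np

# Simplified sketch of window smoothing: convolve the data with a
# normalized window (a 'flat' window gives a plain moving average).
def smooth_1d(y, window_len=11, window='hanning'):
    window_len = int(window_len) | 1            # enforce odd window length
    w = np.ones(window_len) if window == 'flat' else np.hanning(window_len)
    # reflect the edges so the output keeps the input length
    pad = window_len // 2
    ypad = np.r_[y[pad:0:-1], y, y[-2:-pad - 2:-1]]
    return np.convolve(ypad, w / w.sum(), mode='valid')

y = np.sin(np.linspace(0, 6, 200)) + 0.1 * np.random.default_rng(0).normal(size=200)
ys = smooth_1d(y, window_len=11)
```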

crop(xmin, xmax, axis=-1, endpoint=True)[source]

Crop ufile data to only include points within the specified range along a given axis. Modifies the ufile object in place.

Parameters
  • xmin – Lower bound.

  • xmax – Upper bound.

  • axis – Dimension to bound. Accepts integer (0), key (‘X0’), or name (‘TIME’).

  • endpoint – Keeps an extra point on either end, ensuring the bounds are contained within the range of ufile data.

from_OMFITprofiles(OMFITprofiles_fit)[source]

Populate the u-file based on an OMFITprofiles fit xarray DataArray

Parameters

OMFITprofiles_fit – input OMFITprofiles fit xarray DataArray

reset()[source]

Set up basic bare bones structure

omfit_xml

class omfit_classes.omfit_xml.OMFITxml(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.omfit_ascii.OMFITascii

OMFIT class used to interface with XML input files

Parameters
  • filename – filename passed to OMFITascii class

  • **kw – keyword dictionary passed to OMFITascii class

load()[source]
save()[source]

The save method is meant to be overridden by classes that use OMFITobject as a superclass. If left as-is, this method detects whether .filename has changed and, if so, copies the file from the original .filename (stored in the .link attribute) to the new .filename

omfit_yaml

class omfit_classes.omfit_yaml.OMFITyaml(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict, omfit_classes.omfit_ascii.OMFITascii

OMFIT class used to read, write, and parse YAML files

Parameters
  • filename – filename of the yaml file

  • **kw – arguments passed to __init__ of OMFITascii

load()[source]
save()[source]

The save method is meant to be overridden by classes that use OMFITobject as a superclass. If left as-is, this method detects whether .filename has changed and, if so, copies the file from the original .filename (stored in the .link attribute) to the new .filename

namelist

omfit_classes.namelist.interpreter(orig, escaped_strings=True)[source]

Parse string value in a fortran namelist format

NOTE: for strings that are arrays of elements one may use the following notation:

>> line = ‘1 1.2 2.3 4.5 8*5.6’
>> values = []
>> for item in re.split(‘[ | ]+’, line.strip()):
>>     values.extend(tolist(namelist.interpreter(item)))

Parameters
  • orig – string value element in a fortran namelist format

  • escaped_strings – do strings follow proper escaping

Returns

parsed namelist element
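The Fortran repeat notation (e.g. 8*5.6) can be illustrated with a standalone sketch (interpret_item below is a hypothetical stand-in for namelist.interpreter, which handles many more value formats such as booleans, complex numbers, and strings):

```python
import re

def interpret_item(item):
    """Hypothetical minimal parser for numeric namelist values,
    illustrating only the Fortran repeat syntax, e.g. 8*5.6 -> [5.6]*8."""
    if '*' in item:
        count, value = item.split('*', 1)
        return [float(value)] * int(count)
    return [float(item)]

line = '1 1.2 2.3 4.5 8*5.6'
values = []
for item in re.split(r'[ ,]+', line.strip()):
    values.extend(interpret_item(item))
# values is the expanded 12-element list
```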

omfit_classes.namelist.array_encoder(orig, line='', level=0, separator_arrays=' ', compress_arrays=True, max_array_chars=79, escaped_strings=True)[source]
omfit_classes.namelist.encoder(orig, escaped_strings=True, dotBoolean=True, compress_arrays=True, max_array_chars=79)[source]
class omfit_classes.namelist.NamelistName(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict

Defines the variables contained between the &XXX and / delimiters of a FORTRAN namelist

collectArrays(_dimensions=None, **input_dimensions)[source]

This function collects the multiple namelist arrays into a single one:

collectArrays(**{'__default__':0,        # default value for whole namelist (use when no default is found)
                 'BCMOM':{               # options for specific entry in the namelist
                     'default':3,        # a default value must be defined to perform math ops (automatically set by a=... )
                     'shape':(30,30),    # this overrides automatic shape detection (automatically set by a(30,30)=...)
                     'offset':(-10,-10), # this overrides automatic offset detection (automatically set to be the minimum of the offsets of the entries in all dimensions a(-10,-10)=...)
                     'dtype':0}          # this overrides automatic type detection (automatically set to float if at least one float is found)
                 })
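A standalone sketch of the collection idea (hypothetical and simplified; the real method also infers shapes, offsets, and dtypes as described above):

```python
# Indexed namelist entries such as BCMOM(1)=5.0 and BCMOM(4)=7.0 are
# merged into a single array entry, with unset positions filled by the
# declared default (here 3.0, as in the BCMOM example above).
entries = {('BCMOM', 1): 5.0, ('BCMOM', 4): 7.0}   # (name, 1-based Fortran index)
default = 3.0
size = max(idx for (_, idx) in entries)
collected = [default] * size
for (_, idx), val in entries.items():
    collected[idx - 1] = val   # convert Fortran 1-based index to 0-based
# collected holds one array spanning all indexed assignments
```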
collect(value)[source]
class omfit_classes.namelist.NamelistFile(*args, **kwargs)[source]

Bases: omfit_classes.namelist.NamelistName

FORTRAN namelist file object, which can contain multiple namelists blocks

Parameters
  • filename – filename to be parsed

  • input_string – input string to be parsed (takes precedence over filename)

  • nospaceIsComment – whether a line which starts without a space should be retained as a comment. If None, a “smart” guess is attempted

  • outsideOfNamelistIsComment – whether the content outside of the namelist blocks should be retained as comments. If None, a “smart” guess is attempted

  • retain_comments – whether comments should be retained or discarded

  • skip_to_symbol – string to jump to for the parsing. Content before this string is ignored

  • collect_arrays – whether arrays defined throughout the namelist should be collected into single entries (e.g. a=5,a(1,4)=0)

  • multiDepth – whether nested namelists are allowed

  • bang_comment_symbol – string containing the characters that should be interpreted as comment delimiters.

  • equals – how the equal sign should be written when saving the namelist

  • compress_arrays – compress repeated elements in an array by using v*n namelist syntax

  • max_array_chars – wrap long array lines

  • explicit_arrays – (True,False,1) whether to place name(1) in front of arrays. If 1 then (1) is only placed in front of arrays that have only one value.

  • separator_arrays – characters to use between array elements

  • split_arrays – write each array element explicitly on a separate line. Specifically, this functionality was introduced to split TRANSP arrays

  • idlInput – whether to interpret the namelist as IDL code
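A rough sketch of the &NAME ... / block structure that NamelistFile parses (a hypothetical minimal parser, not the OMFIT implementation, which additionally handles comments, arrays, and nested namelists):

```python
import re

# Split a namelist string into &NAME ... / blocks and read the
# key = value pairs within each block.
text = """
&inputs
 nx = 129
 alx = 2.5
/
"""
blocks = {}
for m in re.finditer(r'&(\w+)(.*?)^\s*/\s*$', text, re.S | re.M):
    name, body = m.group(1), m.group(2)
    block = {}
    for line in body.strip().splitlines():
        key, _, value = line.partition('=')
        block[key.strip()] = float(value) if '.' in value else int(value)
    blocks[name] = block
# blocks maps each namelist block name to its key/value pairs
```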

parse(content)[source]
saveas(filename)[source]

save namelist to a new file

save(fp=None)[source]
load(filename='')[source]

Load namelist from file.

class omfit_classes.namelist.fortran_environment(nml)[source]

Bases: object

Environment class used to allow the FORTRAN index convention for sparrays in a namelist

class omfit_classes.namelist.sparray(shape=None, default=nan, dtype=None, wrap_dim=0, offset=0, index_offset=False)[source]

Bases: object

Class for n-dimensional sparse array objects using Python’s dictionary structure. Based upon: http://www.janeriksolem.net/2010/02/sparray-sparse-n-dimensional-arrays-in.html

dense()[source]

Convert to dense NumPy array

sum()[source]

Sum of elements

fortran(index, value)[source]
isnan(x)[source]
fortran_repr()[source]
lt(other)[source]
le(other)[source]
eq(other)[source]
ge(other)[source]
gt(other)[source]
is_(other)[source]
is_not(other)[source]
add(other)[source]
and_(other)[source]
floordiv(other)[source]
index(other)[source]
lshift(other)[source]
mod(other)[source]
mul(other)[source]
matmul(other)[source]
or_(other)[source]
pow(other)[source]
rshift(other)[source]
sub(other)[source]
truediv(other)[source]
xor(other)[source]
bool()[source]
real()[source]
imag()[source]
not_()[source]
truth()[source]
abs()[source]
inv()[source]
invert()[source]
neg()[source]
pos()[source]
copy()[source]
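The dictionary-backed storage idea can be sketched as follows (a simplified stand-in, not the actual sparray API, which also supports Fortran indexing, offsets, and operator overloading):

```python
import numpy as np

# Only explicitly set entries are stored in a dict keyed by index tuple;
# all other positions read back as the default value.
class SparseArray:
    def __init__(self, shape, default=0.0):
        self.shape = shape
        self.default = default
        self.data = {}

    def __setitem__(self, index, value):
        self.data[index] = value

    def __getitem__(self, index):
        return self.data.get(index, self.default)

    def dense(self):
        # materialize the full array, filling unset entries with the default
        out = np.full(self.shape, self.default)
        for index, value in self.data.items():
            out[index] = value
        return out

s = SparseArray((3, 3))
s[0, 0] = 1.0
s[2, 1] = 5.0
```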

fluxSurface

class omfit_classes.fluxSurface.fluxSurfaces(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict

Trace flux surfaces and calculate flux-surface averaged and geometric quantities. Inputs can be tables of PSI and Bt, or an OMFITgeqdsk file

Parameters
  • Rin – (ignored if gEQDSK!=None) array of the R grid mesh

  • Zin – (ignored if gEQDSK!=None) array of the Z grid mesh

  • PSIin – (ignored if gEQDSK!=None) PSI defined on the R/Z grid

  • Btin – (ignored if gEQDSK!=None) Bt defined on the R/Z grid

  • Rcenter – (ignored if gEQDSK!=None) Radial location where the vacuum field is defined ( B0 = F[-1] / Rcenter)

  • F – (ignored if gEQDSK!=None) F-poloidal

  • P – (ignored if gEQDSK!=None) pressure

  • rlim – (ignored if gEQDSK!=None) array of limiter r points (used for SOL)

  • zlim – (ignored if gEQDSK!=None) array of limiter z points (used for SOL)

  • gEQDSK – OMFITgeqdsk file or ODS

  • resolution – if int the original equilibrium grid will be multiplied by (resolution+1), if float the original equilibrium grid is interpolated to that resolution (in meters)

  • forceFindSeparatrix – force finding of separatrix even though this may be already available in the gEQDSK file

  • levels – levels in normalized psi. Can be an array ranging from 0 to 1, or the number of flux surfaces

  • map – array ranging from 0 to 1 which will be used to set the levels, or ‘rho’ if flux surfaces are generated based on gEQDSK

  • maxPSI – (default 0.9999)

  • calculateAvgGeo – Boolean which sets whether flux-surface averaged and geometric quantities are automatically calculated

  • quiet – Verbosity level

  • **kw – overwrite key entries

>> OMFIT[‘test’] = OMFITgeqdsk(OMFITsrc + ’/../samples/g133221.01000’)
>> # copy the original flux surfaces
>> flx = copy.deepcopy(OMFIT[‘test’][‘fluxSurfaces’])
>> # to use PSI
>> mapping = None
>> # to use RHO instead of PSI
>> mapping = OMFIT[‘test’][‘RHOVN’]
>> # trace flux surfaces
>> flx.findSurfaces(np.linspace(0, 1, 100), mapping=mapping)
>> # to increase the accuracy of the flux surface tracing (higher numbers –> smoother surfaces, more time, more memory)
>> flx.changeResolution(2)
>> # plot
>> flx.plot()

load()[source]
findSurfaces(levels=None, map=None)[source]

Find flux surfaces at levels

Parameters

levels – defines at which levels the flux surfaces will be traced

  • None: use levels defined in gFile

  • Integer: number of levels

  • list: list of levels to find surfaces at

Parameters

map – psi mapping on which levels are defined (e.g. rho as function of psi)

changeResolution(resolution)[source]
Parameters

resolution – resolution to use when tracing flux surfaces

  • integer: multiplier of the original table

  • float: grid resolution in meters

resample(npts=None, technique='uniform', phase='Xpoint')[source]

resample number of points on flux surfaces

Parameters
  • npts – number of points

  • technique – ‘uniform’,’separatrix’,’pest’

  • phase – float for poloidal angle or ‘Xpoint’

surfAvg(function=None)[source]

Flux surface averaged quantity for each flux surface

Parameters

function – function which returns the value of the quantity to be flux surface averaged at coordinates r,z

Returns

array of the quantity flux-surface averaged for each flux surface

Example

>> def test_avg_function(r, z):
>>     return RectBivariateSplineNaN(Z, R, PSI, k=1).ev(z, r)

surface_integral(what)[source]

Cross section integral of a quantity

Parameters

what – quantity to be integrated specified as array at flux surface

Returns

array of the integration from core to edge

volume_integral(what)[source]

Volume integral of a quantity

Parameters

what – quantity to be integrated specified as array at flux surface

Returns

array of the integration from core to edge

plotFigure(*args, **kw)[source]
plot(only2D=False, info=False, label=None, **kw)[source]
rz_miller_geometry(poloidal_resolution=101)[source]

Return R,Z coordinates for all flux surfaces from the Miller geometry coefficients in geo. Based on gacode/gapy/src/gapy_geo.f90

Parameters

poloidal_resolution – integer number of equispaced points in poloidal angle, or array of poloidal angles

Returns

2D arrays with (R, Z) flux surface coordinates

sol(levels=31, packing=3, resolution=0.01, rlim=None, zlim=None, open_flx=None)[source]

Trace open field lines flux surfaces in the SOL

Parameters
  • levels

    where flux surfaces should be traced

    • integer number of flux surfaces

    • list of levels

  • packing – if levels is integer, packing of flux surfaces close to the separatrix

  • resolution – accuracy of the flux surface tracing

  • rlim – list of R coordinates points where flux surfaces intersect limiter

  • zlim – list of Z coordinates points where flux surfaces intersect limiter

  • open_flx – dictionary with flux-surface rhon values as keys, specifying where to calculate the SOL (passing this will not set the sol entry in the flux-surfaces class)

Returns

dictionary with SOL flux surface information

to_omas(ods=None, time_index=0)[source]

translate fluxSurfaces class to OMAS data structure

Parameters
  • ods – input ods to which data is added

  • time_index – time index to which data is added

Returns

ODS

from_omas(ods, time_index=0)[source]

populate fluxSurfaces class from OMAS data structure

Parameters
  • ods – input ods to which data is added

  • time_index – time index to which data is added

Returns

ODS

class omfit_classes.fluxSurface.fluxSurfaceTraces(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict

deploy(filename=None, frm='arrays')[source]
load(filename)[source]
omfit_classes.fluxSurface.boundaryShape(a, eps, kapu, kapl, delu, dell, zetaou, zetaiu, zetail, zetaol, zoffset, upnull=False, lonull=False, npts=90, doPlot=False, newsq=array([0.0, 0.0, 0.0, 0.0]), **kw)[source]

Function used to generate boundary shapes based on T. C. Luce, PPCF, 55 9 (2013). Direct Python translation of the IDL program /u/luce/idl/shapemaker3.pro

Parameters
  • a – minor radius

  • eps – aspect ratio

  • kapu – upper elongation

  • kapl – lower elongation

  • delu – upper triangularity

  • dell – lower triangularity

  • zetaou – upper outer squareness

  • zetaiu – upper inner squareness

  • zetail – lower inner squareness

  • zetaol – lower outer squareness

  • zoffset – z-offset

  • upnull – toggle upper x-point

  • lonull – toggle lower x-point

  • npts – int number of points (per quadrant)

  • doPlot – plot boundary shape construction

  • newsq – A 4 element array, into which the new squareness values are stored

Returns

tuple with arrays of r,z,zref

>> boundaryShape(a=0.608,eps=0.374,kapu=1.920,kapl=1.719,delu=0.769,dell=0.463,zetaou=-0.155,zetaiu=-0.255,zetail=-0.174,zetaol=-0.227,zoffset=0.000,upnull=False,lonull=False,doPlot=True)

class omfit_classes.fluxSurface.BoundaryShape(*args, **kwargs)[source]

Bases: omfit_classes.sortedDict.SortedDict

Class used to generate boundary shapes based on T. C. Luce, PPCF, 55 9 (2013)

Parameters
  • a – minor radius

  • eps – inverse aspect ratio (a/R)

  • kapu – upper elongation

  • kapl – lower elongation

  • delu – upper triangularity

  • dell – lower triangularity

  • zetaou – upper outer squareness

  • zetaiu – upper inner squareness

  • zetail – lower inner squareness

  • zetaol – lower outer squareness

  • zoffset – z-offset

  • upnull – toggle upper x-point

  • lonull – toggle lower x-point

  • rbbbs – R boundary points

  • zbbbs – Z boundary points

  • rlim – R limiter points

  • zlim – Z limiter points

  • npts – int number of points (per quadrant)

  • doPlot – plot boundary shape

Returns

tuple with arrays of r,z,zref

>> BoundaryShape(a=0.608,eps=0.374,kapu=1.920,kapl=1.719,delu=0.769,dell=0.463,zetaou=-0.155,zetaiu=-0.255,zetail=-0.174,zetaol=-0.227,zoffset=0.000,doPlot=True)

plot(fig=None)[source]
sameBoundaryShapeAs(rbbbs=None, zbbbs=None, upnull=None, lonull=None, gEQDSK=None, verbose=None, npts=90)[source]

Measure boundary shape from input

Parameters
  • rbbbs – array of R points to match

  • zbbbs – array of Z points to match

  • upnull – upper x-point

  • lonull – lower x-point

  • gEQDSK – input gEQDSK to match (wins over rbbbs and zbbbs)

  • verbose – print debug statements

  • npts – int Number of points

Returns

dictionary with parameters to feed to the boundaryShape function [a, eps, kapu, kapl, delu, dell, zetaou, zetaiu, zetail, zetaol, zoffset, upnull, lonull]

fitBoundaryShape(rbbbs=None, zbbbs=None, upnull=None, lonull=None, gEQDSK=None, verbose=None, precision=0.001, npts=90)[source]

Fit boundary shape from input

Parameters
  • rbbbs – array of R points to match

  • zbbbs – array of Z points to match

  • upnull – upper x-point

  • lonull – lower x-point

  • gEQDSK – input gEQDSK to match (wins over rbbbs and zbbbs)

  • verbose – print debug statements

  • doPlot – visualize match

  • precision – optimization tolerance

  • npts – int Number of points

Returns

dictionary with parameters to feed to the boundaryShape function [a, eps, kapu, kapl, delu, dell, zetaou, zetaiu, zetail, zetaol, zoffset, upnull, lonull]

omfit_classes.fluxSurface.fluxGeo(inputR, inputZ, lcfs=False, doPlot=False)[source]

Calculate geometric properties of a single flux surface

Parameters
  • inputR – R points

  • inputZ – Z points

  • lcfs – whether this is the last closed flux surface (for sharp feature of x-points)

  • doPlot – plot geometric measurements

Returns

dictionary with geometric quantities
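A few of these geometric quantities can be computed with a short standalone sketch (simple_flux_geo below is hypothetical and uses the standard definitions; fluxGeo itself returns many more quantities):

```python
import numpy as np

# Derive basic shape quantities from the (R, Z) points of one surface.
def simple_flux_geo(r, z):
    r0 = 0.5 * (r.max() + r.min())          # geometric major radius
    a = 0.5 * (r.max() - r.min())           # minor radius
    kappa = 0.5 * (z.max() - z.min()) / a   # elongation
    delta_u = (r0 - r[np.argmax(z)]) / a    # upper triangularity
    delta_l = (r0 - r[np.argmin(z)]) / a    # lower triangularity
    return {'R0': r0, 'a': a, 'kappa': kappa, 'delta_u': delta_u, 'delta_l': delta_l}

# test surface: a Miller-like D-shape with known kappa=1.8, delta=0.3
t = np.linspace(0, 2 * np.pi, 201)
r = 1.7 + 0.6 * np.cos(t + np.arcsin(0.3) * np.sin(t))
z = 1.8 * 0.6 * np.sin(t)
geo = simple_flux_geo(r, z)
```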

omfit_classes.fluxSurface.rz_miller(a, R, kappa=1.0, delta=0.0, zeta=0.0, zmag=0.0, poloidal_resolution=101)[source]

Return R,Z coordinates for all flux surfaces from the Miller geometry coefficients in an input.profiles file. Based on gacode/gapy/src/gapy_geo.f90

Parameters
  • a – minor radius

  • R – major radius

  • kappa – elongation

  • delta – triangularity

  • zeta – squareness

  • zmag – z offset

  • poloidal_resolution – integer number of equispaced points in poloidal angle, or array of poloidal angles

Returns

1D arrays with (R, Z) flux surface coordinates
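The standard Miller parameterization underlying this function can be sketched as follows (conventions may differ in detail from the gacode implementation):

```python
import numpy as np

# Standard Miller flux-surface parameterization:
#   R(t) = R0 + a*cos(t + arcsin(delta)*sin(t))
#   Z(t) = zmag + kappa*a*sin(t + zeta*sin(2t))
def miller_rz(a, R, kappa=1.0, delta=0.0, zeta=0.0, zmag=0.0, n=101):
    t = np.linspace(0, 2 * np.pi, n)
    r = R + a * np.cos(t + np.arcsin(delta) * np.sin(t))
    z = zmag + kappa * a * np.sin(t + zeta * np.sin(2 * t))
    return r, z

r, z = miller_rz(a=0.6, R=1.7, kappa=1.8, delta=0.4)
# r spans [R - a, R + a]; z spans [-kappa*a, kappa*a] for zmag=0
```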

omfit_classes.fluxSurface.miller_derived(rmin, rmaj, kappa, delta, zeta, zmag, q)[source]

Originally adapted by A. Tema from FORTRAN of gacode/shared/GEO/GEO_do.f90

Parameters
  • rmin – minor radius

  • rmaj – major radius

  • kappa – elongation

  • delta – triangularity

  • zeta – squareness

  • zmag – z magnetic axis

  • q – safety factor

Returns

dictionary with volume, grad_r0, bp0, bt0