The MemCom data manager can store rather generic data; it is therefore necessary to follow certain conventions when organizing the data for a particular purpose. For databases of unstructured CFD solvers, these conventions are called "the unstructured-hybrid database format". This format is quite straightforward and permits very efficient input and output.
Why are these conventions so important? The graphical post-processor baspl++ relies on them to locate geometry data, field data, etc. on the MemCom database. In principle, baspl++ is quite flexible: it can read CSM databases (linear, non-linear, transient, etc.), CFD databases (structured, unstructured, steady, unsteady, etc.), as well as a variety of MemCom databases from other disciplines (electromagnetics, heat transfer, etc.).
We illustrate this with a basic example using the Python interface to the MemCom data manager: a cube consisting of 4x4x4 hexahedral elements. The lower face of the cube is additionally meshed with 4x4 quadrilateral elements; these constitute the wall (one would proceed similarly for the far field and the symmetry plane(s)). We assume that the code is placed in the file "example.py". The first lines of this script may look like this:
import math
import sys

import numpy

import memcom

# Check command-line arguments.
if len(sys.argv) != 2:
    print('usage: python example.py dbname')
    sys.exit(1)
dbname = sys.argv[1]

# Create a new MemCom database with that name.
db = memcom.db(dbname, 'n')
In what follows, the different datasets required by baspl++ will be described. To make things easier for Python beginners, we adopt a simple (and thus less concise) coding style; for instance, we do not make use of list comprehensions and generator expressions.
In contrast to structured multi-block meshes, unstructured meshes are usually fully connected. In this case, only a single branch is used (in baspl++, a branch is a subset of a mesh; in structured multi-block terms, this would be a single block).
When loading a database, baspl++ always reads the ADIR dataset first to find out which branches exist. In this case, the ADIR dataset contains only the number 1 (for branch 1). The Python code to create the ADIR dataset is thus:
# Create the ADIR dataset.
db['ADIR'] = [1]
After having read the ADIR dataset, baspl++ reads the ELEMENT-PARAMETERS dataset. ELEMENT-PARAMETERS describes which element types (or rather cell shapes, in Finite-Volume analysis) may be present in the geometries. The ELEMENT-PARAMETERS dataset is an array of (first-order) relational tables. For Finite-Element analysis, this dataset can be quite large, as there may be many different element types present in an analysis case. Here, the following will almost always be sufficient:
# Create the ELEMENT-PARAMETERS dataset for
# unstructured-hybrid databases.
db['ELEMENT-PARAMETERS'] = [
    { 'NAME': 'P',  'ELNO': 1, },
    { 'NAME': 'L',  'ELNO': 2, },
    { 'NAME': 'T',  'ELNO': 3, },
    { 'NAME': 'Q',  'ELNO': 4, },
    { 'NAME': 'TE', 'ELNO': 5, },
    { 'NAME': 'PY', 'ELNO': 6, },
    { 'NAME': 'PR', 'ELNO': 7, },
    { 'NAME': 'HE', 'ELNO': 8, },
]
This dataset establishes a one-to-one relationship between the element type names (as known to baspl++) and the element type numbers for internal use (as used to enumerate datasets that are specific to element types). Therefore, each relational sub-table must contain a NAME parameter (the element type name) and an ELNO parameter (a unique cardinal number). baspl++ knows, for instance, that "T" means triangular elements and that "TE" means tetrahedral elements (likewise, "L" designates line elements, "Q" quadrilateral elements, and "HE" hexahedral elements). The values of ELNO must correspond to the NTYP parameters in the branch description tables. Thus, baspl++ now knows that element type 3 corresponds to triangular elements and that element type 5 corresponds to tetrahedral elements.
The following list gives an overview of the different element type families (a family is a set of elements with the same shape).

- Point elements "P". They may represent concentrated masses etc.
- Line elements "L". May be useful for 2D geometries.
- Triangular elements "T".
- Quadrilateral elements "Q".
- Tetrahedral elements "TE".
- Pyramidal elements "PY".
- Prismatic elements "PR" and pentahedral elements "PE". The latter are also called wedge elements and can be regarded as degenerate hexahedral elements. Note that although the shape of prismatic elements is essentially the same as that of pentahedral elements, the nodes, edges, and faces of prismatic elements are enumerated differently from those of pentahedral elements.
- Hexahedral elements "HE".
In addition, elements may be of second order instead of first order. For instance, "T6" designates a second-order triangular element (with 6 nodes), while "TE10" designates a second-order tetrahedral element (with 10 nodes).
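If second-order elements were present, they would simply get their own entries in the ELEMENT-PARAMETERS dataset. The following is a minimal sketch, for illustration only (our cube example contains only first-order elements, so the table shown earlier is sufficient); the ELNO values 9 and 10 are assumptions, they merely have to be unique within the table:

# Variant of the ELEMENT-PARAMETERS dataset that also declares
# second-order triangles and tetrahedra. The ELNO values 9 and
# 10 are assumptions for this sketch; they only have to be
# unique within the table.
db['ELEMENT-PARAMETERS'] = [
    { 'NAME': 'P',    'ELNO': 1, },
    { 'NAME': 'L',    'ELNO': 2, },
    { 'NAME': 'T',    'ELNO': 3, },
    { 'NAME': 'Q',    'ELNO': 4, },
    { 'NAME': 'TE',   'ELNO': 5, },
    { 'NAME': 'PY',   'ELNO': 6, },
    { 'NAME': 'PR',   'ELNO': 7, },
    { 'NAME': 'HE',   'ELNO': 8, },
    { 'NAME': 'T6',   'ELNO': 9, },
    { 'NAME': 'TE10', 'ELNO': 10, },
]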
The FIELDS dataset is the third dataset read by baspl++. It contains the generic names and field types of the solution fields present in the database. Like the ELEMENT-PARAMETERS dataset, it consists of an array of relational sub-tables and is created in Python with the following statement:
# Create the FIELDS dataset for typical CFD databases.
db['FIELDS'] = [
    { 'GNAME': 'DENSITY',      'TYPE': 'VERTEX', },
    { 'GNAME': 'ENTHALPY',     'TYPE': 'VERTEX', },
    { 'GNAME': 'PRESSURE',     'TYPE': 'VERTEX', },
    { 'GNAME': 'TEMPERATURE',  'TYPE': 'VERTEX', },
    { 'GNAME': 'VELOCITY',     'TYPE': 'VERTEX', },
    { 'GNAME': 'SKINFRICTION', 'TYPE': 'VERTEX', },
    { 'GNAME': 'FORCE',        'TYPE': 'VERTEX',
      'DISCRETE': 'YES', 'ADDITIVE': 'YES', },
]
More fields can be added; in that case, the list of dictionaries is extended accordingly.
The meaning of the keys given for each field type is the following:
- GNAME: The generic name of the datasets containing the field values. This key is mandatory. For instance, for GNAME=DENS, a possible dataset name would be DENS.3.75.
- TYPE: The type of field data or, more precisely, where the field data is located. For unstructured grids, the default value VERTEX almost always applies.
- DISCRETE: Default is NO. A value of YES indicates that the field does not have interpolatory character and is valid only at the support points. A typical example of a discrete field is concentrated nodal forces.
- ADDITIVE: Only needed when DISCRETE=YES. Default is NO. A value of YES indicates that for coinciding nodes, the values should be added up. Concentrated nodal forces are an example of a discrete additive field.
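Field dataset names are thus composed of the generic name, the branch number, and the cycle number. A minimal sketch of this naming scheme, using the DENS example from above (the branch and cycle values are illustrative):

# Compose a field dataset name from the generic name,
# the branch number, and the cycle number.
gname = 'DENS'
branch = 3
cycle = 75
dsname = '%s.%d.%d' % (gname, branch, cycle)  # -> 'DENS.3.75'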
For each branch number present in the ADIR dataset, baspl++ will attempt to load the branch geometry. For this, it looks up the branch description table, which is a (first-order) relational dataset. Since the unstructured-hybrid format has only a single branch, there is only a single such dataset, which is called BDTB.1.
In our example, the branch description table dataset can be created like this:
# Create the branch-description table.
db['BDTB.1'] = {
    'MESH': 'UH',
    'ELNO': [4, 8],
}
The MESH parameter indicates the mesh type. baspl++ supports different types of meshes (per branch), such as point-cloud meshes ("P"), unstructured B2000++ meshes ("U"), structured CFD meshes ("S"), and unstructured hybrid (CFD and CSM) meshes ("UH"). For the unstructured-hybrid format, the value for MESH is therefore always "UH".
When MESH="UH"
,
baspl++ expects the additional parameter
ELNO
, which is a list of element type identifiers.
Each number in this list corresponds to the ELNO
parameter in the ELEMENT-PARAMETERS
dataset (thus
the value of 4 designates the element name "Q", and the value of 8
designates the element name "HE".
Note: For earlier versions of baspl++, it was necessary to specify additional keys as well.
Once the branch description table has been read, baspl++ will attempt to read the coordinates for that branch. Since the unstructured-hybrid format has only a single branch, there is only a single dataset, which is called COOR.1.
COOR is a two-dimensional dataset, where the number of columns is 3 and the number of rows is the number of nodes of the branch.
In our example, the coordinates dataset can be created like this:
coor = []
for k in range(5):
    for j in range(5):
        for i in range(5):
            coor.append([float(i), float(j), float(k)])
db['COOR.1'] = coor
Once the branch description table and the nodal coordinates have been read, baspl++ will attempt to read the element connectivities as indicated by the contents of the BDTB dataset. To this end, it loops over the values of the ELNO key, from which it gets the internal element type number. The name of the dataset is then defined by NODS.branch.0.elno.0. That is, if we have branch 1, the quadrilateral element connectivity dataset is NODS.1.0.4.0.
These are two-dimensional integer positional datasets. Each row of a NODS dataset contains the nodal connectivities for a single element. Hence, the number of rows defines the number of elements for this type. The number of columns defines the number of nodes per element for the respective type. Thus, NODS.1.0.4.0 has four columns and NODS.1.0.8.0 has eight columns in our example. The node numbers start at 1, while in Python, array and list indices start at 0. The node numbers are checked by baspl++ for consistency with the total number of nodes for that branch.
In our example, the connectivity datasets can be created like this:
#
# Create the element connectivity datasets.
#
def get_node(i, j, k):
    return k * 5 * 5 + j * 5 + i + 1

# Surface connectivity (wall). These elements have the
# element numbers 0-15. We omit the definition of
# surface connectivity for the far field etc.
snods = []
for j in range(4):
    for i in range(4):
        snods.append([
            get_node(i + 0, j + 0, 0),
            get_node(i + 1, j + 0, 0),
            get_node(i + 1, j + 1, 0),
            get_node(i + 0, j + 1, 0),
        ])
db['NODS.1.0.4.0'] = snods

# Volume connectivity. These elements have the
# element numbers 16-79.
vnods = []
for k in range(4):
    for j in range(4):
        for i in range(4):
            vnods.append([
                get_node(i + 0, j + 0, k + 0),
                get_node(i + 1, j + 0, k + 0),
                get_node(i + 1, j + 1, k + 0),
                get_node(i + 0, j + 1, k + 0),
                get_node(i + 0, j + 0, k + 1),
                get_node(i + 1, j + 0, k + 1),
                get_node(i + 1, j + 1, k + 1),
                get_node(i + 0, j + 1, k + 1),
            ])
db['NODS.1.0.8.0'] = vnods
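As a sanity check mirroring the consistency check performed by baspl++ (this is a sketch for the reader, not baspl++'s actual code), one can verify that every node number referenced by the connectivities lies between 1 and the number of nodes of the branch:

# Verify that all node numbers referenced by the
# connectivity datasets are valid for this branch.
nnodes = len(coor)
for conn in snods + vnods:
    for node in conn:
        assert 1 <= node <= nnodes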
The ELGR and ELPA datasets are not mandatory, but very useful. They can be used to categorize elements of the same type into different groups. If these datasets are present, baspl++ reads them and is able to select elements on a group/panel basis.
The datasets are named like the NODS datasets. Each dataset has n rows and one column, where n is the number of elements of that element type. Each row contains the group code (ELGR) or panel code (ELPA) for the corresponding element, with 0 being the default code.
In our example, we are interested only in marking the wall with the element group code 1. This can be achieved for instance with the following lines of code:
# Create the element-group datasets for the wall elements.
elgr = []
for i in range(4 * 4):
    elgr.append(1)
db['ELGR.1.0.4.0'] = elgr
The velocity datasets are enumerated VELOCITY.branch.cycle and are two-dimensional datasets with 3 columns and a number of rows equal to the number of nodes.
# Construct an artificial velocity field (for
# demonstration purposes).
velo = []
for k in range(5):
    for j in range(5):
        for i in range(5):
            velo.append([1.0 + float(i), 1.0 + float(j), 1.0 + float(k)])
cycle = 1
db['VELOCITY.1.%d' % cycle] = velo
If only the values at the surface nodes are to be stored in this dataset, it must be of the indexed type: instead of 3 columns, we use 4 columns, the first containing the node index (starting from 1), and we mark the dataset as indexed in its descriptor.
# Select only those nodes belonging to the surface
# elements. For this, we re-use the snods variable defined
# above. Change the node indices such that the numbers
# start at 0 and sort them.
sel = set()
for conn in snods:
    sel.update(conn)
lsel = []
for n in sel:
    lsel.append(n - 1)
sel = sorted(lsel)

# Select velocities from those nodes belonging to the surface and
# create a list of (index, velo-x, velo-y, velo-z) rows. For
# MemCom, the indices must start at 1.
ivelo = []
for n in sel:
    ivelo.append([n + 1, velo[n][0], velo[n][1], velo[n][2]])

# Save the dataset and mark it as indexed.
dsname = 'VELOCITY.1.%d' % cycle
db[dsname] = ivelo
db[dsname].desc['INDEXED'] = 'YES'
Note: Using a generator expression and a list comprehension, the lines of code for obtaining the sel and ivelo lists can be reduced to:

sel = sorted(set(n - 1 for conn in snods for n in conn))
ivelo = [[n + 1] + velo[n] for n in sel]
These datasets are treated in the same way as the velocity datasets. Usually, the following datasets are present in addition:
DENSITY.branch.cycle
ENTHALPY.branch.cycle
PRESSURE.branch.cycle
TEMPERATURE.branch.cycle
These datasets have only one component and thus only one column. Depending on the type of solver and the type of analysis performed, these datasets may or may not be present.
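For illustration, a scalar field like PRESSURE could be stored as follows, continuing our example. This is a minimal sketch assuming that, as with the ELGR dataset above, a one-column dataset can be written as a flat list; the constant value is artificial:

# Construct an artificial pressure field (for
# demonstration purposes): one value per node.
pres = []
for n in range(5 * 5 * 5):
    pres.append(101325.0)
db['PRESSURE.1.%d' % cycle] = pres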
Other fields not described here can be defined as well. Here too, it is necessary to add an entry in the FIELDS dataset, where the type of field etc. is described.
Datasets containing the skin-friction factors are defined on the wall surface. Here, we assume that they are given at the nodes belonging to the wall surface.
# Construct an artificial skin-friction factor field (for
# demonstration purposes). The skin-friction factors are
# to be given as a vector field (tangential to the surface).
# Here, the surface is flat, thus the direction of the
# tangent vector is constant. We re-use the 'sel' variable
# defined above.
tangent = numpy.array([1.0, 0.0, 0.0])

# Create a list of (index, sf-x, sf-y, sf-z) rows for the
# nodes belonging to the surface. For MemCom, the indices
# must start at 1.
isf = []
for n in sel:
    isf.append([n + 1, tangent[0], tangent[1], tangent[2]])

# Save the dataset and mark it as indexed.
dsname = 'SKINFRICTION.1.%d' % cycle
db[dsname] = isf
db[dsname].desc['INDEXED'] = 'YES'