cwl_d_auto.v1_2

Generated by schema-salad code generator

Members

Aliases

DocumentRootType
alias DocumentRootType = Either!(CommandLineTool, ExpressionTool, Workflow, Operation)
importFromURI
alias importFromURI = import_!DocumentRootType

Classes

ArraySchema
class ArraySchema
CWLType
class CWLType

Extends primitive types with the concept of a file and directory as builtin types.

File: A File object
Directory: A Directory object

CWLVersion
class CWLVersion

Version symbols for published CWL document versions.

CommandInputArraySchema
class CommandInputArraySchema
CommandInputEnumSchema
class CommandInputEnumSchema
CommandInputParameter
class CommandInputParameter

An input parameter for a CommandLineTool.

CommandInputRecordField
class CommandInputRecordField
CommandInputRecordSchema
class CommandInputRecordSchema
CommandLineBindable
class CommandLineBindable
CommandLineBinding
class CommandLineBinding

When listed under inputBinding in the input schema, the term "value" refers to the corresponding value in the input object. For binding objects listed in CommandLineTool.arguments, the term "value" refers to the effective value after evaluating valueFrom.

The binding behavior when building the command line depends on the data type of the value. If there is a mismatch between the type described by the input schema and the effective value, such as resulting from an expression evaluation, an implementation must use the data type of the effective value.

- **string**: Add prefix and the string to the command line.
- **number**: Add prefix and the decimal representation to the command line.
- **boolean**: If true, add prefix to the command line. If false, add nothing.
- **File**: Add prefix and the value of `File.path` to the command line.
- **Directory**: Add prefix and the value of `Directory.path` to the command line.
- **array**: If itemSeparator is specified, add prefix and join the array into a single string with itemSeparator separating the items. Otherwise, first add prefix, then recursively process individual elements. If the array is empty, it adds nothing to the command line.
- **object**: Add prefix only, and recursively add object fields for which inputBinding is specified.
- **null**: Add nothing.
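The schema these classes model is serialized as CWL YAML. A minimal, hypothetical tool fragment (input names invented) illustrating the prefix, boolean, and array binding rules might look like:

```yaml
# Hypothetical CommandLineTool inputs section.
# For files = ["a.txt", "b.txt"], the array rule yields: -f a.txt,b.txt
inputs:
  files:
    type: string[]
    inputBinding:
      prefix: -f
      itemSeparator: ","
  verbose:
    type: boolean
    inputBinding:
      prefix: --verbose   # emitted only when the value is true
```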

CommandLineTool
class CommandLineTool

This defines the schema of the CWL Command Line Tool Description document.

CommandOutputArraySchema
class CommandOutputArraySchema
CommandOutputBinding
class CommandOutputBinding

Describes how to generate an output parameter based on the files produced by a CommandLineTool. The output parameter value is generated by applying these operations in the following order:

1. glob
2. loadContents
3. outputEval
4. secondaryFiles
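A sketch of the ordering above, as a hypothetical CWL YAML output (names invented):

```yaml
# Hypothetical output: glob matches files, loadContents makes their text
# available to the expression, and outputEval post-processes the matches.
outputs:
  report:
    type: File
    outputBinding:
      glob: "*.txt"           # 1. match files in the output directory
      loadContents: true      # 2. load matched file contents
      outputEval: $(self[0])  # 3. pick the first match
```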

CommandOutputEnumSchema
class CommandOutputEnumSchema
CommandOutputParameter
class CommandOutputParameter

An output parameter for a CommandLineTool.

CommandOutputRecordField
class CommandOutputRecordField
CommandOutputRecordSchema
class CommandOutputRecordSchema
Directory
class Directory

Represents a directory to present to a command line tool. Directories are represented as objects with class of Directory. Directory objects have a number of properties that provide metadata about the directory.

The location property of a Directory is a URI that uniquely identifies the directory. Implementations must support the file:// URI scheme and may support other schemes such as http://. As an alternative to location, implementations must also accept the path property on Directory, which must be a filesystem path available on the same host as the CWL runner (for inputs) or the runtime environment of a command line tool execution (for command line tool outputs).

A Directory object may have a listing field, a list of the File and Directory objects that are contained in the Directory. For each entry in listing, the basename property defines the name of the File or subdirectory when staged to disk. If listing is not provided, the implementation must have some way of fetching the Directory listing at runtime based on the location field.

If a Directory does not have location, it is a Directory literal. A Directory literal must provide listing. Directory literals must be created on disk at runtime as needed. The resources in a Directory literal do not need to have any implied relationship in their location; for example, a Directory listing may contain two files located on different hosts. It is the responsibility of the runtime to ensure that those files are staged to disk appropriately. Secondary files associated with files in listing must also be staged to the same Directory.

When executing a CommandLineTool, Directories must be recursively staged first and have local values of path assigned. Directory objects in CommandLineTool output must provide either a location URI or a path property in the context of the tool execution runtime (local to the compute node, or within the executing container).

An ExpressionTool may forward file references from input to output by using the same value for location. Name conflicts (the same basename appearing multiple times in listing or in any entry in secondaryFiles in the listing) are a fatal error.
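A hypothetical Directory literal (names invented): it has no location, so listing is required and the runtime creates it on disk as needed.

```yaml
my_dir:
  class: Directory
  basename: config
  listing:
    - class: File
      basename: settings.ini
      contents: "verbose = true\n"
```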

Dirent
class Dirent

Define a file or subdirectory that must be staged to a particular place prior to executing the command line tool. May be the result of executing an expression, such as building a configuration file from a template. Usually files are staged within the designated output directory. However, under certain circumstances, files may be staged at arbitrary locations, see discussion for entryname.
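As a sketch, a Dirent appears in the listing of an InitialWorkDirRequirement; here a hypothetical configuration file is built from a template (input name invented):

```yaml
requirements:
  InitialWorkDirRequirement:
    listing:
      # entry may be the result of evaluating an expression
      - entryname: config.json
        entry: '{"threads": $(inputs.threads)}'
```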

DockerRequirement
class DockerRequirement

Indicates that a workflow component should be run in a Docker or Docker-compatible (such as Singularity and udocker) container environment and specifies how to fetch or build the image. If a CommandLineTool lists DockerRequirement under hints (or requirements), it may (or must) be run in the specified Docker container.

The platform must first acquire or install the correct Docker image as specified by dockerPull, dockerImport, dockerLoad or dockerFile. The platform must execute the tool in the container using docker run with the appropriate Docker image and tool command line.

The workflow platform may provide input files and the designated output directory through the use of volume bind mounts. The platform should rewrite file paths in the input object to correspond to the Docker bind mounted locations. That is, the platform should rewrite values in the parameter context such as runtime.outdir, runtime.tmpdir and others to be valid paths within the container. The platform must ensure that runtime.outdir and runtime.tmpdir are distinct directories.

When running a tool contained in Docker, the workflow platform must not assume anything about the contents of the Docker container, such as the presence or absence of specific software, except to assume that the generated command line represents a valid command within the runtime environment of the container.

A container image may specify an ENTRYPOINT and/or CMD. Command line arguments will be appended after all elements of ENTRYPOINT, and will override all elements specified using CMD (in other words, CMD is only used when the CommandLineTool definition produces an empty command line). Use of implicit ENTRYPOINT or CMD is discouraged due to reproducibility concerns of the implicit hidden execution point (for further discussion, see https://doi.org/10.12688/f1000research.15140.1). Portable CommandLineTool wrappers in which use of a container is optional must not rely on ENTRYPOINT or CMD. CommandLineTools which do rely on ENTRYPOINT or CMD must list DockerRequirement in the requirements section.

Interaction with other requirements: if EnvVarRequirement is specified alongside a DockerRequirement, the environment variables must be provided to Docker using --env or --env-file and interact with the container's preexisting environment as defined by Docker.
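The most common form is a dockerPull reference; a minimal fragment (image chosen purely as an example):

```yaml
requirements:
  DockerRequirement:
    dockerPull: docker.io/library/ubuntu:22.04
```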

EnumSchema
class EnumSchema

Define an enumerated type.

EnvVarRequirement
class EnvVarRequirement

Define a list of environment variables which will be set in the execution environment of the tool. See EnvironmentDef for details.

EnvironmentDef
class EnvironmentDef

Define an environment variable that will be set in the runtime environment by the workflow platform when executing the command line tool. May be the result of executing an expression, such as getting a parameter from input.
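A sketch of EnvironmentDef entries inside an EnvVarRequirement (variable names and the expression are hypothetical):

```yaml
requirements:
  EnvVarRequirement:
    envDef:
      LC_ALL: "C"
      THREADS: $(inputs.threads)   # value taken from an input parameter
```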

ExpressionTool
class ExpressionTool

An ExpressionTool is a type of Process object that can be run by itself or as a Workflow step. It executes a pure Javascript expression that has access to the same input parameters as a workflow. It is meant to be used sparingly as a way to isolate complex Javascript expressions that need to operate on input data and produce some result; perhaps just a rearrangement of the inputs. No Docker software container is required or allowed.
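A minimal, hypothetical ExpressionTool that sums an integer array (names invented):

```yaml
cwlVersion: v1.2
class: ExpressionTool
requirements:
  InlineJavascriptRequirement: {}
inputs:
  numbers: int[]
outputs:
  total: int
expression: |
  ${ return {"total": inputs.numbers.reduce(function(a, b) { return a + b; }, 0)}; }
```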

ExpressionToolOutputParameter
class ExpressionToolOutputParameter
File
class File

Represents a file (or group of files when secondaryFiles is provided) that will be accessible by tools using standard POSIX file system call API such as open(2) and read(2). Files are represented as objects with class of File. File objects have a number of properties that provide metadata about the file.

The location property of a File is a URI that uniquely identifies the file. Implementations must support the file:// URI scheme and may support other schemes such as http:// and https://. The value of location may also be a relative reference, in which case it must be resolved relative to the URI of the document it appears in. As an alternative to location, implementations must also accept the path property on File, which must be a filesystem path available on the same host as the CWL runner (for inputs) or the runtime environment of a command line tool execution (for command line tool outputs).

If no location or path is specified, a file object must specify contents with the UTF-8 text content of the file. This is a "file literal". File literals do not correspond to external resources, but are created on disk with the given contents when needed for executing a tool. Where appropriate, expressions can return file literals to define new files on a runtime. The maximum size of contents is 64 kilobytes.

The basename property defines the filename on disk where the file is staged. This may differ from the resource name. If not provided, basename must be computed from the last path part of location and made available to expressions.

The secondaryFiles property is a list of File or Directory objects that must be staged in the same directory as the primary file. It is an error for file names to be duplicated in secondaryFiles.

The size property is the size in bytes of the File. It must be computed from the resource and made available to expressions. The checksum field contains a cryptographic hash of the file content for use in verifying file contents. Implementations may, at user option, enable or disable computation of the checksum field for performance or other reasons. However, the ability to compute output checksums is required to pass the CWL conformance test suite.

When executing a CommandLineTool, the files and secondary files may be staged to an arbitrary directory, but must use the value of basename for the filename. The path property must be the file path in the context of the tool execution runtime (local to the compute node, or within the executing container). All computed properties should be available to expressions. File literals also must be staged and path must be set.

When collecting CommandLineTool outputs, glob matching returns file paths (with the path property) and the derived properties. This can all be modified by outputEval. Alternately, if the file cwl.output.json is present in the output, outputBinding is ignored. File objects in the output must provide either a location URI or a path property in the context of the tool execution runtime (local to the compute node, or within the executing container).

When evaluating an ExpressionTool, file objects must be referenced via location (the expression tool does not have access to files on disk so path is meaningless) or as file literals. It is legal to return a file object with an existing location but a different basename. The loadContents field of ExpressionTool inputs behaves the same as on CommandLineTool inputs; however, it is not meaningful on the outputs. An ExpressionTool may forward file references from input to output by using the same value for location.
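A hypothetical file literal (names and contents invented): it has neither location nor path, so contents is required.

```yaml
greeting:
  class: File
  basename: hello.txt
  contents: "Hello, world!\n"   # UTF-8 text, at most 64 kilobytes
```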

InitialWorkDirRequirement
class InitialWorkDirRequirement

Define a list of files and subdirectories that must be staged by the workflow platform prior to executing the command line tool. Normally files are staged within the designated output directory. However, when running inside containers, files may be staged at arbitrary locations, see discussion for `Dirent.entryname`. Together with DockerRequirement.dockerOutputDirectory it is possible to control the locations of both input and output files when running in containers.

InlineJavascriptRequirement
class InlineJavascriptRequirement

Indicates that the workflow platform must support inline Javascript expressions. If this requirement is not present, the workflow platform must not perform expression interpolation.

InplaceUpdateRequirement
class InplaceUpdateRequirement

If inplaceUpdate is true, then an implementation supporting this feature may permit tools to directly update files with `writable: true` in InitialWorkDirRequirement. That is, as an optimization, files may be destructively modified in place as opposed to copied and updated.

An implementation must ensure that only one workflow step may access a writable file at a time. It is an error if a file which is writable by one workflow step is accessed (for reading or writing) by any other workflow step running independently. However, a file which has been updated in a previous completed step may be used as input to multiple steps, provided it is read-only in every step.

Workflow steps which modify a file must produce the modified file as output. Downstream steps which further process the file must use the output of previous steps, and not refer to a common input (this is necessary for both ordering and correctness). Workflow authors should provide this in the hints section.

The intent of this feature is that workflows produce the same results whether or not InplaceUpdateRequirement is supported by the implementation, and this feature is primarily available as an optimization for particular environments. Users and implementers should be aware that workflows that destructively modify inputs may not be repeatable or reproducible. In particular, enabling this feature implies that WorkReuse should not be enabled.

InputArraySchema
class InputArraySchema
InputBinding
class InputBinding
InputEnumSchema
class InputEnumSchema
InputRecordField
class InputRecordField
InputRecordSchema
class InputRecordSchema
LinkMergeMethod
class LinkMergeMethod

The input link merge method, described in WorkflowStepInput.

LoadListingEnum
class LoadListingEnum

Specify the desired behavior for loading the listing field of a Directory object for use by expressions.

LoadListingRequirement
class LoadListingRequirement

Specify the desired behavior for loading the listing field of a Directory object for use by expressions.

MultipleInputFeatureRequirement
class MultipleInputFeatureRequirement

Indicates that the workflow platform must support multiple inbound data links listed in the source field of WorkflowStepInput.

NetworkAccess
class NetworkAccess

Indicate whether a process requires outgoing IPv4/IPv6 network access. Choice of IPv4 or IPv6 is implementation and site specific, correct tools must support both. If networkAccess is false or not specified, tools must not assume network access, except for localhost (the loopback device). If networkAccess is true, the tool must be able to make outgoing connections to network resources. Resources may be on a private subnet or the public Internet. However, implementations and sites may apply their own security policies to restrict what is accessible by the tool. Enabling network access does not imply a publicly routable IP address or the ability to accept inbound connections.
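Granting a tool outgoing network access is a one-line requirement:

```yaml
requirements:
  NetworkAccess:
    networkAccess: true
```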

Operation
class Operation

This record describes an abstract operation. It is a potential step of a workflow that has not yet been bound to a concrete implementation. It specifies an input and output signature, but does not provide enough information to be executed. An implementation (or other tooling) may provide a means of binding an Operation to a concrete process (such as Workflow, CommandLineTool, or ExpressionTool) with a compatible signature.

OperationInputParameter
class OperationInputParameter

Describe an input parameter of an operation.

OperationOutputParameter
class OperationOutputParameter

Describe an output parameter of an operation.

OutputArraySchema
class OutputArraySchema
OutputEnumSchema
class OutputEnumSchema
OutputRecordField
class OutputRecordField
OutputRecordSchema
class OutputRecordSchema
PickValueMethod
class PickValueMethod

Picking non-null values among inbound data links, described in WorkflowStepInput.

PrimitiveType
class PrimitiveType

Salad data types are based on Avro schema declarations. Refer to the Avro schema declaration documentation for detailed information.

null: no value
boolean: a binary value
int: 32-bit signed integer
long: 64-bit signed integer
float: single precision (32-bit) IEEE 754 floating-point number
double: double precision (64-bit) IEEE 754 floating-point number
string: Unicode character sequence

RecordField
class RecordField

A field of a record.

RecordSchema
class RecordSchema
ResourceRequirement
class ResourceRequirement

Specify basic hardware resource requirements.

"min" is the minimum amount of a resource that must be reserved to schedule a job. If "min" cannot be satisfied, the job should not be run.

"max" is the maximum amount of a resource that the job shall be allocated. If a node has sufficient resources, multiple jobs may be scheduled on a single node provided each job's "max" resource requirements are met. If a job attempts to exceed its resource allocation, an implementation may deny additional resources, which may result in job failure.

If both "min" and "max" are specified, an implementation may choose to allocate any amount between "min" and "max", with the actual allocation provided in the runtime object. If "min" is specified but "max" is not, then "max" == "min". If "max" is specified but "min" is not, then "min" == "max". It is an error if max < min. It is an error if the value of any of these fields is negative. If neither "min" nor "max" is specified for a resource, use the default values below.
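A sketch of a min/max pair (values invented): the scheduler may allocate anywhere in the stated range, and the actual allocation appears in the runtime object.

```yaml
requirements:
  ResourceRequirement:
    coresMin: 2
    coresMax: 8      # allocation falls somewhere in [2, 8]
    ramMin: 4096     # mebibytes
```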

ScatterFeatureRequirement
class ScatterFeatureRequirement

Indicates that the workflow platform must support the scatter and scatterMethod fields of WorkflowStep.

ScatterMethod
class ScatterMethod

The scatter method, as described in workflow step scatter.

SchemaDefRequirement
class SchemaDefRequirement

This field consists of an array of type definitions which must be used when interpreting the inputs and outputs fields. When a type field contains an IRI, the implementation must check if the type is defined in schemaDefs and use that definition. If the type is not found in schemaDefs, it is an error. The entries in schemaDefs must be processed in the order listed such that later schema definitions may refer to earlier schema definitions.

- **Type definitions are allowed for enum and record types only.**
- Type definitions may be shared by defining them in a file and then $include-ing them in the types field.
- A file can contain a list of type definitions.
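A hypothetical enum definition under SchemaDefRequirement (name and symbols invented; the exact reference syntax used by inputs to point at the definition depends on where the definition lives):

```yaml
requirements:
  SchemaDefRequirement:
    types:
      - name: mode_t
        type: enum
        symbols: [fast, accurate]
```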

SecondaryFileSchema
class SecondaryFileSchema

Secondary files are specified using the following micro-DSL:

* If the value is a string, it is transformed to an object with two fields, pattern and required.
* By default, the value of required is null (this indicates default behavior, which may be based on the context).
* If the value ends with a question mark (?), the question mark is stripped off and the field required is set to False.
* The remaining value is assigned to the field pattern.

For implementation details and examples, please see this section in the Schema Salad specification.
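For example, the string form ".bai?" is equivalent to this object form:

```yaml
secondaryFiles:
  - pattern: ".bai"
    required: false   # the "?" suffix was stripped and required set to false
```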

ShellCommandRequirement
class ShellCommandRequirement

Modify the behavior of CommandLineTool to generate a single string containing a shell command line. Each item in the arguments list must be joined into a string separated by single spaces and quoted to prevent interpretation by the shell, unless CommandLineBinding for that argument contains shellQuote: false. If shellQuote: false is specified, the argument is joined into the command string without quoting, which allows the use of shell metacharacters such as | for pipes.
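A sketch of a pipe built with shellQuote: false (command chosen purely as an example):

```yaml
requirements:
  ShellCommandRequirement: {}
arguments:
  - cat
  - input.txt
  - shellQuote: false
    valueFrom: "|"   # left unquoted, so the shell sees a pipe
  - wc
  - "-l"
```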

SoftwarePackage
class SoftwarePackage
SoftwareRequirement
class SoftwareRequirement

A list of software packages that should be configured in the environment of the defined process.

StepInputExpressionRequirement
class StepInputExpressionRequirement

Indicate that the workflow platform must support the valueFrom field of WorkflowStepInput.

SubworkflowFeatureRequirement
class SubworkflowFeatureRequirement

Indicates that the workflow platform must support nested workflows in the run field of WorkflowStep.

ToolTimeLimit
class ToolTimeLimit

Set an upper limit on the execution time of a CommandLineTool. A CommandLineTool whose execution duration exceeds the time limit may be preemptively terminated and considered failed. May also be used by batch systems to make scheduling decisions. The execution duration excludes external operations, such as staging of files, pulling a docker image etc, and only counts wall-time for the execution of the command line itself.
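Setting the limit is a single field (value invented):

```yaml
requirements:
  ToolTimeLimit:
    timelimit: 3600   # seconds of wall-clock time for the command itself
```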

WorkReuse
class WorkReuse

For implementations that support reusing output from past work (on the assumption that same code and same input produce same results), control whether to enable or disable the reuse behavior for a particular tool or step (to accommodate situations where that assumption is incorrect). A reused step is not executed but instead returns the same output as the original execution. If WorkReuse is not specified, correct tools should assume it is enabled by default.
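Disabling reuse for a non-deterministic step is a single field:

```yaml
requirements:
  WorkReuse:
    enableReuse: false   # always re-execute; never return cached output
```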

Workflow
class Workflow

A workflow describes a set of **steps** and the **dependencies** between those steps. When a step produces output that will be consumed by a second step, the first step is a dependency of the second step. When there is a dependency, the workflow engine must execute the preceding step and wait for it to successfully produce output before executing the dependent step. If two steps are defined in the workflow graph that are not directly or indirectly dependent, these steps are **independent**, and may execute in any order or execute concurrently. A workflow is complete when all steps have been executed.

Dependencies between parameters are expressed using the source field on workflow step input parameters and the outputSource field on workflow output parameters. The source field on each workflow step input parameter expresses the data links that contribute to the value of the step input parameter (the "sink"). A workflow step can only begin execution when every data link connected to a step has been fulfilled. The outputSource field on each workflow output parameter expresses the data links that contribute to the value of the workflow output parameter (the "sink"). Workflow execution cannot complete successfully until every data link connected to an output parameter has been fulfilled.

Workflow success and failure: a completed step must result in one of success, temporaryFailure or permanentFailure states. An implementation may choose to retry a step execution which resulted in temporaryFailure. An implementation may choose to either continue running other steps of a workflow, or terminate immediately upon permanentFailure.

* If any step of a workflow execution results in permanentFailure, then the workflow status is permanentFailure.
* If one or more steps result in temporaryFailure and all other steps complete with success or are not executed, then the workflow status is temporaryFailure.
* If all workflow steps are executed and complete with success, then the workflow status is success.

Extensions: ScatterFeatureRequirement and SubworkflowFeatureRequirement are available as standard extensions to core workflow semantics.
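A minimal, hypothetical workflow showing a source data link and an outputSource data link (step, tool, and parameter names invented):

```yaml
cwlVersion: v1.2
class: Workflow
inputs:
  message: string
outputs:
  out:
    type: File
    outputSource: echo_step/echoed   # data link from the step output
steps:
  echo_step:
    run: echo-tool.cwl               # hypothetical tool document
    in:
      text: message                  # data link from the workflow input
    out: [echoed]
```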

WorkflowInputParameter
class WorkflowInputParameter
WorkflowOutputParameter
class WorkflowOutputParameter

Describe an output parameter of a workflow. The parameter must be connected to one or more parameters defined in the workflow that will provide the value of the output parameter. It is legal to connect a WorkflowInputParameter to a WorkflowOutputParameter. See WorkflowStepInput for discussion of linkMerge and pickValue.

WorkflowStep
class WorkflowStep

A workflow step is an executable element of a workflow. It specifies the underlying process implementation (such as CommandLineTool or another Workflow) in the run field and connects the input and output parameters of the underlying process to workflow parameters.

Scatter/gather: to use scatter/gather, ScatterFeatureRequirement must be specified in the workflow or workflow step requirements. A "scatter" operation specifies that the associated workflow step or subworkflow should execute separately over a list of input elements. Each job making up a scatter operation is independent and may be executed concurrently.

The scatter field specifies one or more input parameters which will be scattered. An input parameter may be listed more than once. The declared type of each input parameter implicitly becomes an array of items of the input parameter type. If a parameter is listed more than once, it becomes a nested array. As a result, upstream parameters which are connected to scattered parameters must be arrays. All output parameter types are also implicitly wrapped in arrays. Each job in the scatter results in an entry in the output array. If any scattered parameter runtime value is an empty array, all outputs are set to empty arrays and no work is done for the step, according to applicable scattering rules.

If scatter declares more than one input parameter, scatterMethod describes how to decompose the input into a discrete set of jobs.

* **dotproduct** specifies that each of the input arrays are aligned and one element taken from each array to construct each job. It is an error if all input arrays are not the same length.
* **nested_crossproduct** specifies the Cartesian product of the inputs, producing a job for every combination of the scattered inputs. The output must be nested arrays for each level of scattering, in the order that the input arrays are listed in the scatter field.
* **flat_crossproduct** specifies the Cartesian product of the inputs, producing a job for every combination of the scattered inputs. The output arrays must be flattened to a single level, but otherwise listed in the order that the input arrays are listed in the scatter field.

Conditional execution (optional): conditional execution makes execution of a step conditional on an expression. A step that is not executed is "skipped". A skipped step produces null for all output parameters. The condition is evaluated after scatter, using the input object of each individual scatter job. This means that over a set of scatter jobs, some may be executed and some may be skipped. When the results are gathered, skipped steps must be null in the output arrays.

The when field controls conditional execution. This is an expression that must be evaluated with inputs bound to the step input object (or individual scatter job), and returns a boolean value. It is an error if this expression returns a value other than true or false. Conditionals in CWL are an optional feature and are not required to be implemented by all consumers of CWL documents. An implementation that does not support conditionals must return a fatal error when attempting to execute a workflow that uses conditional constructs the implementation does not support.

Subworkflows: to specify a nested workflow as part of a workflow step, SubworkflowFeatureRequirement must be specified in the workflow or workflow step requirements. It is a fatal error if a workflow directly or indirectly invokes itself as a subworkflow (recursive workflows are not allowed).
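A sketch combining scatter and conditional execution (step, tool, and parameter names invented):

```yaml
requirements:
  ScatterFeatureRequirement: {}
  InlineJavascriptRequirement: {}
steps:
  process:
    run: process-tool.cwl             # hypothetical tool document
    scatter: sample                   # sample's declared type becomes an array
    when: $(inputs.sample != null)    # evaluated per scatter job; skips produce null
    in:
      sample: samples
    out: [result]                     # gathered as an array, null for skipped jobs
```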

WorkflowStepInput
class WorkflowStepInput

The input of a workflow step connects an upstream parameter (from the workflow inputs, or the outputs of other workflow steps) with the input parameters of the process specified by the run field. Only input parameters declared by the target process will be passed through at runtime to the process, though additional parameters may be specified (for use within valueFrom expressions, for instance); unconnected or unused parameters do not represent an error condition.

Input object: a WorkflowStepInput object must contain an id field in the form #fieldname or #prefix/fieldname. When the id field contains a slash (/), the field name consists of the characters following the final slash (the prefix portion may contain one or more slashes to indicate scope). This defines a field of the workflow step input object with the value of the source parameter(s).

Merging multiple inbound data links: to merge multiple inbound data links, MultipleInputFeatureRequirement must be specified in the workflow or workflow step requirements. If the sink parameter is an array, or named in a workflow scatter operation, there may be multiple inbound data links listed in the source field. The values from the input links are merged depending on the method specified in the linkMerge field. If both linkMerge and pickValue are null or not specified, and there is more than one element in the source array, the default method is "merge_nested". If both linkMerge and pickValue are null or not specified, and there is only a single element in the source, then the input parameter takes the scalar value from the single input link (it is *not* wrapped in a single-item list).

* **merge_nested**: The input must be an array consisting of exactly one entry for each input link. If "merge_nested" is specified with a single link, the value from the link must be wrapped in a single-item list.
* **merge_flattened**: The source and sink parameters must be compatible types, or the source type must be compatible with a single element from the "items" type of the destination array parameter. Source parameters which are arrays are concatenated; source parameters which are single element types are appended as single elements.

Picking non-null values among inbound data links: if present, pickValue specifies how to pick non-null values among inbound data links. pickValue is evaluated (1) once all source values from upstream steps or parameters are available, (2) after linkMerge, and (3) before scatter or valueFrom. This is specifically intended to be useful in combination with conditional execution, where several upstream steps may be connected to a single input (source is a list), and skipped steps produce null values. Static type checkers should check for type consistency after inferring what the type will be after pickValue is applied, just as they do currently for linkMerge.

* **first_non_null**: For the first level of a list input, pick the first non-null element. The result is a scalar. It is an error if there is no non-null element. Examples: [null, x, null, y] -> x; [null, [null], null, y] -> [null]; [null, null, null] -> Runtime Error. *Intended use case*: if-else pattern where the value comes either from a conditional step or from a default or fallback value. The conditional step(s) should be placed first in the list.
* **the_only_non_null**: For the first level of a list input, pick the single non-null element. The result is a scalar. It is an error if there is more than one non-null element. Examples: [null, x, null] -> x; [null, x, null, y] -> Runtime Error; [null, [null], null] -> [null]; [null, null, null] -> Runtime Error. *Intended use case*: switch type patterns where the developer considers more than one active code path as a workflow error (possibly indicating an error in writing when condition expressions).
* **all_non_null**: For the first level of a list input, pick all non-null values. The result is a list, which may be empty. Examples: [null, x, null] -> [x]; [x, null, y] -> [x, y]; [null, [x], [null]] -> [[x], [null]]; [null, null, null] -> []. *Intended use case*: it is valid to have more than one source, but sources are conditional, so null sources (from skipped steps) should be filtered out.
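A sketch of the if-else pattern described under first_non_null (step and tool names invented): a conditional step listed first, a fallback second, with skipped-step nulls filtered out by pickValue.

```yaml
requirements:
  MultipleInputFeatureRequirement: {}
steps:
  consume:
    run: consume-tool.cwl                    # hypothetical tool document
    in:
      value:
        source: [maybe_step/out, fallback_step/out]  # conditional source first
        linkMerge: merge_flattened
        pickValue: first_non_null            # scalar result; nulls skipped
    out: [result]
```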

WorkflowStepOutput
class WorkflowStepOutput

Associate an output parameter of the underlying process with a workflow parameter. The workflow parameter (given in the id field) may be used as a source to connect with input parameters of other workflow steps, or with an output parameter of the process. The id field gives a unique identifier for this workflow output parameter; it is the identifier to use in the source field of WorkflowStepInput to connect the output value to downstream parameters.

stderr
class stderr

Only valid as a type for a CommandLineTool output with no outputBinding set.

stdin
class stdin

Only valid as a type for a CommandLineTool input with no inputBinding set. stdin must not be specified at the CommandLineTool level.

stdout
class stdout

Only valid as a type for a CommandLineTool output with no outputBinding set.
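A sketch of the stdout shorthand type (tool and file names invented): the captured standard output is returned as a File.

```yaml
cwlVersion: v1.2
class: CommandLineTool
baseCommand: [echo, hello]
stdout: hello.txt     # optional; a random filename is used if omitted
outputs:
  message:
    type: stdout      # shorthand: captured standard output as a File
```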

Imports

Any (from salad.primitives)
public import salad.primitives : Any;
Expression (from salad.primitives)
public import salad.primitives : Expression;

Manifest constants

parserInfo
enum parserInfo;

parser information

Meta

Date

Date: 2022-11-10