Removes obsolete "pipeline"
netj committed Jan 10, 2017
1 parent 52ea17f commit 053f5e1
Showing 9 changed files with 4 additions and 28 deletions.
2 changes: 1 addition & 1 deletion compiler/README.md
@@ -8,7 +8,7 @@ Here's a brief summary of how the compilation is done.
1. The HOCON syntax used by `deepdive.conf` is interpreted by `hocon2json` and everything is converted into a single JSON config object that holds everything under the key "deepdive".

2. The config object is first extended with some implied extractors, such as initializing the database and loading input tables.
- Then, the dependencies of extractors, factors, pipelines are normalized, and their names are qualified with corresponding prefixes (by `compile-config_normalized`) to make it easier and clearer to produce the final code for execution.
+ Then, the dependencies of extractors are normalized, and their names are qualified with corresponding prefixes (by `compile-config_normalized`) to make it easier and clearer to produce the final code for execution.
DeepDive's built-in processes for variables and factors, such as grounding, learning, inference, and calibration, are added to the config object after the normalization.
User's original config is kept intact under "deepdive" while the normalized one is created under a different key, "deepdive_".
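The normalization pass described above can be sketched roughly as follows. This is a hypothetical Python illustration only (DeepDive implements it as a jq program, `compile-config_normalized`); the extractor names and dependency lists in the example are made up, while the `process/` prefix and the `deepdive_`/`dependencies_` keys follow the text above:

```python
# Illustrative sketch of the name-qualification pass (not DeepDive's actual
# implementation, which is written in jq).
def normalize(config):
    deepdive = config["deepdive"]
    normalized = {"execution": {"processes": {}}}
    # Qualify each extractor name with the process/ prefix.
    for name, extractor in deepdive.get("extraction", {}).get("extractors", {}).items():
        qualified = dict(extractor)
        # Rewrite dependencies with qualified names under a new key, dependencies_.
        # (Simplification: real dependencies may also refer to data/ relations.)
        qualified["dependencies_"] = [
            "process/" + dep for dep in extractor.get("dependencies", [])
        ]
        normalized["execution"]["processes"]["process/" + name] = qualified
    # The user's original config stays intact under "deepdive";
    # the normalized copy lives under a separate key, "deepdive_".
    return {"deepdive": deepdive, "deepdive_": normalized}

config = {"deepdive": {"extraction": {"extractors": {
    "ext_people": {"dependencies": ["ext_sentences"]}}}}}
out = normalize(config)
```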

@@ -18,8 +18,6 @@ $deepdive.execution.processes | to_entries[] |
[ .dependencies_[]?
| select(
(startswith("process/") and in($deepdive.execution.processes)
# TODO factor/ aren't really used, so remove?
or startswith("factor/") and (ltrimstr("factor/") | in($deepdive.inference.factors_byName))
or startswith("data/") and (ltrimstr("data/") | in($deepdive.schema.relations)
# XXX assume user is doing the right thing if schema.json is empty
or ($deepdive.schema.relations | length == 0))
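The jq filter above keeps only dependencies that resolve to a known process or relation. A rough Python equivalent of that selection logic (hypothetical helper and example names, for illustration only):

```python
# Illustrative re-statement of the jq dependency filter above.
def resolvable(dep, processes, relations):
    # Keep process/* dependencies that name a known execution process.
    if dep.startswith("process/"):
        return dep in processes
    # Keep data/* dependencies that name a known relation; if the schema
    # is empty, assume the user is doing the right thing (as the XXX
    # comment in the jq code notes).
    if dep.startswith("data/"):
        return len(relations) == 0 or dep[len("data/"):] in relations
    return False

processes = {"process/ext_people": {}}
relations = {"sentences": {}}
deps = ["process/ext_people", "data/sentences", "data/unknown", "process/missing"]
kept = [d for d in deps if resolvable(d, processes, relations)]
```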
7 changes: 0 additions & 7 deletions compiler/compile-code/compile-code-Makefile
@@ -15,8 +15,6 @@ def mktargets: mktargets("done");
# default commands
define CMD_data
endef
- define CMD_pipeline
- endef
define CMD_process
endef
define CMD_factor
@@ -57,11 +55,6 @@ reset: \(keys | mktargets("reset"))
$(RESET) \(.key | mktarget("done"))
"

- # pipelines are special
- , if .key | startswith("pipeline/") then "
- .PHONY: \(.key | mktarget)
- " else empty end

)

] | join("") }
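The jq template above emits one Makefile rule per process, using `.done` marker files as targets. A minimal Python sketch of that idea (hypothetical helper names; DeepDive's real generator is the jq template itself):

```python
# Illustrative sketch of generating per-process Makefile rules from the
# compiled execution plan, with .done marker files as targets.
def mktarget(name, suffix="done"):
    # e.g. "process/ext_people" -> "process/ext_people.done"
    return f"{name}.{suffix}"

def makefile_for(processes):
    rules = []
    for name, process in sorted(processes.items()):
        deps = " ".join(mktarget(d) for d in process.get("dependencies_", []))
        rules.append(f"{mktarget(name)}: {deps}".rstrip())
    return "\n".join(rules) + "\n"

mk = makefile_for({
    "process/ext_people": {"dependencies_": ["process/ext_sentences"]},
    "process/ext_sentences": {},
})
```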
1 change: 0 additions & 1 deletion compiler/compile-code/compile-code-dataflow_dot
@@ -28,7 +28,6 @@ def nodeType: sub("/.*$"; "");
# edge attributes by [srcType][dstType]
(
{ "" : { "": "color=\"#999999\"" }
- , pipeline : { "": "style=dotted arrowhead=odiamond" }
} * ($deepdiveDotConfig.edge_attrs // {})
) as $edge_attrs |

2 changes: 0 additions & 2 deletions compiler/compile-config/compile-config-0.00-init_objects
@@ -13,8 +13,6 @@
| .deepdive_.extraction.extractors |= . + {}
| .deepdive_.inference |= . + {}
| .deepdive_.inference.factors |= . + {}
- | .deepdive_.pipeline |= . + {}
- | .deepdive_.pipeline.pipelines |= . + {}

# make sure our intermediate representation for execution plan set up
| .deepdive_.execution |= . + {}
12 changes: 0 additions & 12 deletions compiler/compile-config/compile-config-1.00-qualified_names
@@ -3,7 +3,6 @@
#
# - extractor names are prefixed with process/*
# - factor names are prefixed with factor/*
- # - pipeline names are prefixed with pipeline/*
# - output_relation names are prefixed with data/* and kept under a new key output_
# - dependencies are rewritten with qualified names under a new key dependencies_
##
@@ -31,16 +30,5 @@
)
)

- # qualify names in pipelines
- | .deepdive_.pipeline.pipelines |= with_entries
- ( .key as $p
- | .key |= "pipeline/\(.)"
- | .value |= map( if $deepdive.extraction.extractors[.] then "process/\(.)"
- elif $deepdive.inference.factors[.] then "factor/\(.)"
- else error("\(.): Neither an extractor or inference rule in pipeline \($p)")
- end
- )
- )

# turn all extractors into processes in the execution plan under compilation
| .deepdive_.execution.processes += .deepdive_.extraction.extractors
2 changes: 1 addition & 1 deletion compiler/compile-config/compile-config-2.01-grounding
@@ -702,7 +702,7 @@ def factorWeightDescriptionSqlExpr:

## from_grounding
# A nominal process to make it easy to redo the grounding
- # TODO remove this once deepdive-do supports process groups or pipelines
+ # TODO remove this once deepdive-do supports process groups
| .deepdive_.execution.processes += {
"process/grounding/from_grounding": {
style: "cmd_extractor", cmd: ": no-op"
2 changes: 1 addition & 1 deletion compiler/deepdive-compile
@@ -148,7 +148,7 @@ deepdive-check -a -c "$PWD"/config.json 'compiled_*' 2>&1 | sed 's/^/ /' >&2

###############################################################################
STEP "Compiling executable code into:"
- # compile extractors and factors under process/ and factor/
+ # compile extractors under process/
pids=(--)
for cc in "$DEEPDIVE_HOME"/util/compile-code/compile-code-*; do
[[ -x "$cc" ]] || continue
2 changes: 1 addition & 1 deletion doc/configuration.md
@@ -45,7 +45,7 @@ deepdive {
}
```

- In this template, the global section `deepdive` contains following major sections: `db`, `schema`, `extraction`, `inference`, `calibration`. Other optional sections are `sampler` and `pipeline`.
+ In this template, the global section `deepdive` contains following major sections: `db`, `schema`, `extraction`, `inference`, `calibration`. Other optional sections are `sampler` and `execution`.
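For reference, a minimal `deepdive.conf` skeleton with these sections might look like the sketch below. Only the section names come from the text above; the bodies are placeholder comments, not actual configuration keys:

```hocon
deepdive {
  db {
    # database connection settings
  }
  schema {
    # variable relations and their types
  }
  extraction {
    # extractor definitions
  }
  inference {
    # inference rules (factors)
  }
  calibration {
    # calibration settings
  }
  # optional sections
  sampler {
  }
  execution {
  }
}
```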

Links to these sections:

