Normalization using re-analysis changes the index and task results of the program, making them invalid until the non-normalized ASTs are analyzed again. This can be fixed by making the global index and task engine immutable and creating a temporary index and task engine that contain the data from the global data structures but whose modifications do not alter them. After normalization, the temporary data structures are disposed and the global ones are restored.
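The temporary-index idea amounts to a copy-on-write layer: reads fall through to an immutable snapshot of the global index, writes go to a disposable overlay. A minimal sketch (all names hypothetical; the real index and task engine are far more involved):

```python
class LayeredIndex:
    """Copy-on-write view: reads fall through to an immutable base,
    writes go to a temporary overlay that can be discarded."""

    def __init__(self, base):
        self._base = dict(base)   # snapshot of the global index, treated as immutable
        self._overlay = {}        # temporary modifications made during normalization
        self._deleted = set()     # keys removed in the temporary view

    def get(self, key):
        if key in self._deleted:
            return None
        if key in self._overlay:
            return self._overlay[key]
        return self._base.get(key)

    def put(self, key, value):
        # Writes never touch the base index.
        self._deleted.discard(key)
        self._overlay[key] = value

    def delete(self, key):
        self._overlay.pop(key, None)
        self._deleted.add(key)

    def dispose(self):
        """Drop all temporary data; the global index is untouched."""
        self._overlay.clear()
        self._deleted.clear()


global_index = {"Foo": "analysis data"}
tmp = LayeredIndex(global_index)
tmp.put("Foo", "normalized data")       # normalization writes to the overlay
assert tmp.get("Foo") == "normalized data"
tmp.dispose()                           # after normalization, restore the global view
assert tmp.get("Foo") == "analysis data"
assert global_index["Foo"] == "analysis data"
```

The same layering would apply to the task engine: task results produced during normalization live in the overlay and disappear with it.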

Submitted by Gabriël Konat on 17 July 2013 at 01:55

On 17 July 2013 at 01:55 Gabriël Konat tagged @gohla

On 17 July 2013 at 07:07 Guido Wachsmuth commented:

We need to think about the interaction with incremental analysis and with incremental compilation here. You probably need the index data before normalisation to get incremental analysis right, but might need index data after normalisation to get incremental compilation right.

On 18 July 2013 at 01:04 Gabriël Konat commented:

Indeed, for compilation we need the index and task engine after normalization; for incremental analysis, the index and task engine before normalization. In that case we should keep the ‘temporary’ index and task engine I was talking about around permanently instead of throwing them away after normalization. We should probably name these so that we can easily switch between them, e.g. index-setup(|language, project-path, subindex-name) and task-setup(|project-path, subtaskengine-name).
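The named sub-index idea could be sketched as a registry keyed by project path and name, with a setup operation that switches the current index (the strategy signatures index-setup/task-setup come from the comment above; everything else here is a hypothetical illustration):

```python
class IndexRegistry:
    """Keeps multiple named sub-indices per project and lets phases
    switch between them, e.g. pre- vs. post-normalization data."""

    def __init__(self):
        self._indices = {}   # (project_path, subindex_name) -> index dict
        self._current = None

    def setup(self, project_path, subindex_name):
        # Loosely analogous to index-setup(|language, project-path, subindex-name):
        # create the sub-index if needed and make it the current one.
        key = (project_path, subindex_name)
        self._indices.setdefault(key, {})
        self._current = key
        return self._indices[key]

    def current(self):
        return self._indices[self._current]


reg = IndexRegistry()
analysis = reg.setup("/project", "analysis")
analysis["Foo"] = "pre-normalization data"       # used by incremental analysis
normalized = reg.setup("/project", "normalized")
normalized["Foo"] = "post-normalization data"    # used by incremental compilation
# Switching back gives incremental analysis its original data:
assert reg.setup("/project", "analysis")["Foo"] == "pre-normalization data"
```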

On 18 July 2013 at 01:28 Guido Wachsmuth commented:

This might even be a deeper issue. Later phases might also store data in the index. A general solution might involve phased index data, where changes trigger re-evaluation of phases. This would require associating a phase with each piece of data: not the phase that produces the data, but the phase that depends on it.
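As a hypothetical sketch of phased index data: each entry records the phase that depends on it, and changing an entry marks that phase for re-evaluation (names and structure are illustrative only):

```python
class PhasedIndex:
    """Index entries tagged with the phase that depends on them;
    a changed entry invalidates that phase."""

    def __init__(self):
        self._data = {}      # key -> (value, dependent_phase)
        self.dirty = set()   # phases that must be re-evaluated

    def put(self, key, value, dependent_phase):
        old = self._data.get(key)
        self._data[key] = (value, dependent_phase)
        if old is not None and old[0] != value:
            # A change in this data triggers re-evaluation of the
            # phase that depends on it, not the phase that wrote it.
            self.dirty.add(dependent_phase)


idx = PhasedIndex()
# Analysis produces a type fact that normalization depends on:
idx.put("Foo:type", "Int", dependent_phase="normalization")
# Re-running analysis yields a different result, so normalization is dirty:
idx.put("Foo:type", "String", dependent_phase="normalization")
assert idx.dirty == {"normalization"}
```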

On 18 July 2013 at 01:38 Gabriël Konat commented:

Yep, a reactive pipeline. Although the re-evaluation of phases in the pipeline is a different issue I think.

On 18 July 2013 at 01:54 Guido Wachsmuth commented:

Right, the re-evaluation is a different issue. But the association with phases might be prepared here. Currently, we have the analysis phase, which produces data, which triggers re-evaluation of tasks created during this analysis. Then, there is a normalisation phase, where re-evaluation is triggered by changes in the analysis data.

On 18 July 2013 at 02:44 Gabriël Konat commented:

Some example pipelines:

Editor (incremental analysis) (-> Desugaring) -> Normalization -> Compilation.
Editor -> Completion. (Incremental completion ;)
Editor -> Refactoring -> the refactoring is thrown away when invalid, or merged into Editor when valid.
Editor -> Split into multiple for parallel analysis -> Merged back into Editor.

On 23 July 2013 at 20:00 Gabriël Konat commented:

Closing this since I have a solution for the normalization issue. We should create new YG issues for the other problems.

On 23 July 2013 at 20:00 Gabriël Konat closed this issue.
