AI to SVG interpretation/conversion architecture review
The current architecture for interpreting the AI structure has its limits: it leads to duplicated responsibility in the conversion code, and it does not explicitly represent the mental model behind the AI document. This issue exists to review the architecture, gather requirements, and come up with something 'nice', meaning an architecture that represents the model explicitly so that the conversion code avoids duplicated responsibility.
Requirements
- Create a kind of context to pass the interpretation to (do the commands change an image, a live shape, or a path?)
- Process AIElements that are located before the context indicator as part of that context. Example: we have a style, but it can belong to raster images as well as to paths. Is the clipping a joined raster image or a path clip?
- Still allow composition of parsers. Example: we can have one style parser shared by path style and raster image style.
- Fallback: if we cannot convert the live shape, resort to displaying the path.
- Mapping: many AIElements -> one conversion -> many SVG elements
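The first two requirements can be sketched as a small buffering context: elements such as a style arrive before the element that tells us whether they belong to a raster image or a path, so the interpreter holds them until a context indicator is seen. This is a minimal illustration only; all class and method names are hypothetical and not taken from the existing inkai code.

```python
from dataclasses import dataclass, field

@dataclass
class ConversionContext:
    """Hypothetical sketch: buffer AIElements whose meaning depends on
    a context indicator that only appears later in the stream."""
    pending: list = field(default_factory=list)

    def add(self, ai_element):
        """Buffer an element we cannot interpret yet (e.g. a style)."""
        self.pending.append(ai_element)

    def resolve(self, indicator):
        """The indicator (e.g. 'raster' or 'path') decides how the
        buffered elements are interpreted; returns them together."""
        buffered, self.pending = self.pending, []
        return indicator, buffered

# Usage: a style command is buffered, then resolved once the target is known.
ctx = ConversionContext()
ctx.add("style: fill #ff0000")
kind, elements = ctx.resolve("raster")
print(kind, elements)  # → raster ['style: fill #ff0000']
```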
Possible path forward
- @joneuhauser please comment requirements.
- Come up with an architecture
- Implement the architecture and move existing code over for an example that demonstrates its benefits:
- live shape vs. path
- path style vs. raster image style
- Migrate all code in the inkai/svg folder
- Celebrate and close this issue
😄
Architecture Proposal
(1) .ai file -> (2) extract (Done) -> (3) parse (Done) -> (4) Match patterns (new) -> (5) convert elements (in progress) -> (6) build SVG (in progress)
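Steps (4) and (5) could work as a pattern-matching pass over the parsed stream: a pattern consumes several AIElements at once, and one conversion emits several SVG elements (the many -> one -> many mapping from the requirements). The sketch below is hypothetical; the function and pattern names are illustrative, not existing inkai code.

```python
def match_patterns(ai_elements, patterns):
    """Greedily try each pattern against the front of the element stream;
    a match consumes n elements and yields one conversion result."""
    i = 0
    while i < len(ai_elements):
        for pattern in patterns:
            n, conversion = pattern(ai_elements[i:])
            if n:  # pattern matched n leading elements
                yield conversion
                i += n
                break
        else:
            i += 1  # no pattern matched; skip this element

def styled_path(elements):
    """Match [style, path] -> one conversion producing two SVG nodes."""
    if len(elements) >= 2 and elements[0].startswith("style") and elements[1] == "path":
        return 2, ["<g>", "<path/>"]  # many AIElements -> many SVG elements
    return 0, None

svg = [node for conv in match_patterns(["style:red", "path"], [styled_path])
       for node in conv]
print(svg)  # → ['<g>', '<path/>']
```

Because each pattern is just a callable, parsers stay composable: the same style pattern could be reused inside a raster-image pattern and a path pattern.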
What already exists but has not been named explicitly:
- Context: Which element is the parent of the new SVG Element? Layer or SVG root? (6)
- Possible composition for style (5)
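The unnamed "context" in step (6) amounts to tracking the current parent node while building the SVG tree. A minimal sketch, assuming nothing about the existing builder code (the `SvgBuilder` name and methods are hypothetical):

```python
import xml.etree.ElementTree as ET

class SvgBuilder:
    """Hypothetical sketch: keep track of which SVG node (layer or root)
    newly converted elements are appended to."""

    def __init__(self):
        self.root = ET.Element("svg")
        self.parent = self.root  # current insertion point

    def enter_layer(self, label):
        """Open a layer group and make it the new parent."""
        layer = ET.SubElement(self.parent, "g", {"inkscape:label": label})
        self.parent = layer
        return layer

    def add(self, tag):
        """Append a converted element to the current parent."""
        return ET.SubElement(self.parent, tag)

builder = SvgBuilder()
builder.enter_layer("Layer 1")
builder.add("path")
print(ET.tostring(builder.root).decode())
```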
Edited by Nicco Kunzmann