[#2143] Improve the performance of the inline mangled modules pass

Motivation and Context

The inline mangled modules pass is slow because it maps over every core AST type and every typed AST type to replace mangled module names with the first module alias name. This MR addresses some of that slowness.
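
As a rough picture of what the pass does, here is a toy sketch. The AST and helper names below are invented for illustration and are not LIGO's real core/typed AST or API: every type expression is traversed, and each mangled module name in a module path is replaced with its first alias.

```ocaml
(* Toy AST standing in for LIGO's type expressions. *)
type ty_expr =
  | T_variable of string
  | T_module_access of string list * string   (* e.g. Mangled_xxx.t *)
  | T_arrow of ty_expr * ty_expr

(* Walk the whole type, rewriting every module name in every path. *)
let rec map_module_names (f : string -> string) : ty_expr -> ty_expr = function
  | T_variable _ as t -> t
  | T_module_access (path, name) -> T_module_access (List.map f path, name)
  | T_arrow (dom, codom) ->
    T_arrow (map_module_names f dom, map_module_names f codom)

(* Replace a mangled module name with its first recorded alias, if any. *)
let inline_name (aliases : (string, string) Hashtbl.t) (m : string) : string =
  Option.value (Hashtbl.find_opt aliases m) ~default:m
```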

My benchmarks on checker-ligo's main.mligo file indicate that the inline mangled modules pass previously took 1.78G cycles and now takes 1.09G cycles.

Related issues

Part of #2143.

Checklist for the LIGO Language Server

  • I checked whether I need to update the README.md file for the plugin and did so if necessary:
    • If I implemented a new LSP request, I added it to the list of supported features that may be disabled
    • If I implemented a new LSP method, I added it to the list of supported functionality
  • I checked that my changes work in Emacs, Vim, and Visual Studio Code
  • (Before merging) The commit history is squashed and prettified, and follows the Serokell commit policy, or the MR is set to squash the commits

Description

The first optimization is to skip mapping core types, since, as far as I can see, they never contain mangled names; the mangled names come from inferred constructors and records.
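
A minimal sketch of this change, with an invented program record standing in for the real representation: only the typed fragment is traversed now, whereas previously both fragments were mapped.

```ocaml
type 'ty program = { core_types : 'ty list; typed_types : 'ty list }

(* Core types are returned untouched, on the assumption that they never
   contain mangled module names. *)
let inline_mangled_in_program ~(map_ty : 'ty -> 'ty) (p : 'ty program) : 'ty program =
  { p with typed_types = List.map map_ty p.typed_types }
```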

The second optimization is to add a cache mapping mangled Module_var.ts to inlined Module_var.ts, which avoids calling mvar_to_id and id_to_mvar; these turned out to be expensive.
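
A minimal sketch of the cache, assuming a stand-in Module_var module and treating mvar_to_id / id_to_mvar as opaque, expensive conversions (the real types, signatures, and the way they compose differ):

```ocaml
module Module_var = struct
  type t = string                      (* stand-in for the real abstract type *)
  let equal = String.equal
  let hash = Hashtbl.hash
end

module Cache = Hashtbl.Make (Module_var)

let cache : Module_var.t Cache.t = Cache.create 64

(* Pay the mvar_to_id / id_to_mvar cost only on a cache miss; subsequent
   occurrences of the same mangled variable are looked up directly. *)
let inline_mvar
    ~(mvar_to_id : Module_var.t -> int)
    ~(id_to_mvar : int -> Module_var.t)
    (mangled : Module_var.t) : Module_var.t =
  match Cache.find_opt cache mangled with
  | Some inlined -> inlined
  | None ->
    let inlined = id_to_mvar (mvar_to_id mangled) in
    Cache.add cache mangled inlined;
    inlined
```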

I think it should be possible to cache on types instead, so that types already in the cache are not mapped at all. However, that posed many problems: curried functions became uncurried in JsLIGO, function names went missing in JsLIGO, types with the same names got swapped, etc. I believe this is likely due to the [@hash.ignore] (and equal and compare) annotations used in various places in the typed AST, so this idea didn't quite work. It also turns out that, by my benchmarks, this approach didn't save any time anyway; I don't know the exact reason, but I suspect that comparing/hashing these trees was itself expensive.
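
To illustrate the [@hash.ignore] suspicion (field names below are invented, and the snippet assumes the ppx_hash and ppx_compare derivers): values that differ only in an ignored field hash and compare as equal, so a cache keyed on the type values themselves can silently conflate distinct types.

```ocaml
type toy_ty =
  { name : string
  ; location : int [@hash.ignore] [@compare.ignore]
  }
[@@deriving hash, compare]

let a = { name = "t"; location = 1 }
let b = { name = "t"; location = 2 }

(* [a] and [b] differ, yet they compare equal and hash identically, so a
   hash table keyed on [toy_ty] would treat them as the same key. *)
let conflated = compare_toy_ty a b = 0 && hash_toy_ty a = hash_toy_ty b
```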

The "real" solution that would completely avoid performance drops would be to make a breaking change in LIGO, by forbidding constructors and records to be added to the global environment, however, for various reasons we decided to not do that. See: https://tezos-dev.slack.com/archives/GKH15NM8W/p1711989500607219?thread_ts=1710979375.330029&cid=GKH15NM8W

Component

  • compiler
  • website
  • webide
  • vscode-plugin
  • debugger

Types of changes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • Performance improvement (non-breaking change that improves performance)
  • None (change with no changelog)

Changelog

Small performance boost for the LSP. The processing time between keystrokes will decrease.

Checklist:

  • Changes follow the existing coding style (use dune build @fmt to check).
  • Tests for the changes have been added (for bug fixes / features).
  • Documentation has been updated.
  • Changelog description has been added (if appropriate).
  • Start titles under ## Changelog section with #### (if appropriate).
  • There are no images or uploaded files in the changelog
  • Examples in changed behaviour have been added to the changelog (for breaking change / feature).