BTD: Unleashing the Power of Decompilation for x86 Deep Neural Network Executables

Uncover the power of decompilation for x86 deep neural network executables with BTD, a decompiler that targets DNN models compiled into standalone binaries that run without additional software, exposing the underlying model architectures to potential exploitation.

Key takeaways
  • BTD (Binary Deep Neural Network Decompiler) decompiles DNN executables: standalone binaries of deep neural network models that can be executed directly without the need for additional software.
  • Decompilation of a DNN executable does not require a comprehensive understanding of the underlying deep learning frameworks such as PyTorch and TensorFlow.
  • Decompiling a DNN executable is hard due to complex control flow and data flow, but it can be achieved through symbolic execution and heuristics.
  • Deep learning compilers optimize memory layout, flattening and rearranging four-dimensional weight arrays before they are embedded in the emitted x86 code (see the layout sketch after this list).
  • Existing attacks against deep learning models can be categorized by the attacker's knowledge: white-box attacks, black-box attacks, and attacks on obscure DNN models.
  • The power of decompilation lies in its ability to expose the underlying model architectures, which can then be stolen or reused by an attacker.
  • Deep learning compilers can generate code that is harder to understand than the original code.
  • Decompiling a DNN executable can be challenging due to the complexity of the generated code.
  • The dimensions of DNN operators can be inferred through a set of heuristics and the summarized semantics of the convolution operator (see the convolution sketch below).
  • The gap (stride) between two consecutive input addresses can imply the dimensions of the weights (see the stride-inference sketch below).
  • The attacker can run an unprivileged process on the same hardware and observe the data flow.
  • The assembly functions can be mapped to DNN operators through a set of heuristics (see the classifier sketch below).
  • The compiled library contains input and output functions that pass inputs and outputs through function arguments as memory pointers.
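
The memory-layout point can be made concrete with a small sketch. This is an illustrative example, not code from BTD itself: the tensor shapes and the "OIHW4i"-style tiling are assumptions chosen to show how a compiler-picked blocked layout scrambles the byte order of a four-dimensional weight tensor in the emitted binary.

```python
import numpy as np

# Hypothetical shapes: 4 output channels, 8 input channels, 3x3 kernel (OIHW).
O, I, H, W = 4, 8, 3, 3
weights = np.arange(O * I * H * W, dtype=np.float32).reshape(O, I, H, W)

# A deep learning compiler may retile the weights, e.g. splitting the input
# channels into blocks of 4 (an "OIHW4i"-style layout) so vectorized x86 code
# can load contiguous lanes; the bytes baked into the executable then follow
# this rearranged order rather than the original OIHW order.
blocked = weights.reshape(O, I // 4, 4, H, W).transpose(0, 1, 3, 4, 2)

# The flat byte streams differ, which is why the original 4-D shape cannot be
# read back from the binary without reasoning about the access pattern.
print(weights.ravel()[:8])                         # 0, 1, 2, ... in OIHW order
print(np.ascontiguousarray(blocked).ravel()[:8])   # interleaved after retiling
```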
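
For the dimension-inference takeaway, the following is the reference semantics of a plain stride-1, unpadded 2-D convolution: the kind of loop-nest summary that symbolic execution would aim to recover from the compiled operator. The function name and shapes are chosen for illustration only.

```python
import numpy as np

def conv2d(x, w):
    """x: (C, H, W) input, w: (O, C, KH, KW) weights -> (O, H-KH+1, W-KW+1)."""
    O, C, KH, KW = w.shape
    _, H, W = x.shape
    out = np.zeros((O, H - KH + 1, W - KW + 1), dtype=x.dtype)
    for o in range(O):                       # each output channel
        for i in range(H - KH + 1):          # each output row
            for j in range(W - KW + 1):      # each output column
                # Multiply-accumulate over all input channels and the kernel window.
                out[o, i, j] = np.sum(x[:, i:i + KH, j:j + KW] * w[o])
    return out

# Tiny check of the output dimensions a decompiler would want to recover.
x = np.random.rand(3, 8, 8).astype(np.float32)
w = np.random.rand(4, 3, 3, 3).astype(np.float32)
print(conv2d(x, w).shape)   # (4, 6, 6)
```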
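
The address-gap observation can be sketched as follows. Assuming a hypothetical trace of the addresses from which a compiled kernel loads float32 weights, the run lengths between larger strides reveal the innermost dimension. This is a toy heuristic for illustration, not BTD's actual algorithm.

```python
def infer_inner_dims(addresses, elem_size=4):
    """Guess innermost-dimension run lengths from a weight-load address trace."""
    strides = [b - a for a, b in zip(addresses, addresses[1:])]
    runs, run = [], 1
    for s in strides:
        if s == elem_size:        # contiguous element inside the innermost loop
            run += 1
        else:                     # larger gap: an outer loop wrapped around
            runs.append(run)
            run = 1
    runs.append(run)
    return runs

# Hypothetical trace: a 3x4 block of float32 weights read row by row, with a
# padded row stride of 20 bytes (so a gap appears between consecutive rows).
base = 0x1000
trace = [base + r * 20 + c * 4 for r in range(3) for c in range(4)]
print(infer_inner_dims(trace))   # [4, 4, 4] -> three rows of four elements
```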
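
Finally, a toy version of mapping assembly functions to operators: classify a disassembled function by its instruction mix. The mnemonic choices and thresholds below are assumptions made for illustration, not the heuristics actually used by BTD.

```python
def classify_operator(mnemonics):
    """Very rough operator guess from the x86 mnemonics of one function."""
    fma = sum(m in ("vfmadd231ps", "vmulps", "mulps") for m in mnemonics)
    cmp_max = sum(m in ("vmaxps", "maxps") for m in mnemonics)
    if fma and not cmp_max:
        return "conv/dense (multiply-accumulate dominated)"
    if cmp_max and not fma:
        return "relu/max-pooling (comparison dominated)"
    if fma and cmp_max:
        return "fused conv + relu"
    return "unknown"

# Hypothetical instruction mixes for two compiled functions.
print(classify_operator(["vfmadd231ps"] * 40 + ["vmovups"] * 10))  # conv/dense
print(classify_operator(["vmaxps"] * 8 + ["vmovups"] * 8))         # relu/max-pooling
```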