Commit 815bc30d authored by Thanassis Tsiodras

Moving to Python3, for good.

parents f205b620 ddd0bdf8
[flake8]
ignore = E501,E225,C103
max-line-length = 160
@@ -4,3 +4,6 @@
tests-coverage/output
tests-coverage/smp2.asn
tests-coverage/datatypessimulink.cat
tests-coverage/datatypessimulink.pkg
tests-coverage/Simulink_DataView_asn.m
*swp
# Change Log
## 2.1.0
- Major update of the PySide B mapper (API updates)
- Bug fix in the Python A mapper
## 2.0.0
- Moved to Python3
- Added type annotations, checked via mypy
- Added git commit hooks to check via flake8 AND pylint
- Configuration files used to customize the checks for the project's
naming conventions.
- Many bugs identified and fixed.
## 1.2.3
- ctypes backend: emit all ENUMERATED values in DV.py
## 1.2.0
- Not using SWIG anymore for the Python mappers
## 1.1.2
- pyside_b_mapper: added combo box for the asn1 value editor
## 1.1.1
- support empty SEQUENCEs (MyType ::= SEQUENCE {}) - except for Simulink
PY_SRC:=$(wildcard dmt/asn2dataModel.py dmt/aadl2glueC.py dmt/smp2asn.py dmt/*mappers/[a-zA-Z]*py dmt/commonPy/[a-zA-Z]*py)
PY_SRC:=$(filter-out dmt/B_mappers/antlr.main.py dmt/A_mappers/Stubs.py, ${PY_SRC})
# Python3.5 includes an older version of typing, which by default has priority over
# the one installed in $HOME/.local via setup.py.
#
# To address this, we find where our pip-installed typing lives:
TYPING_FOLDER:=$(shell pip3 show typing | grep ^Location | sed 's,^.*: ,,')
export PYTHONPATH=${TYPING_FOLDER}
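# (Hypothetical helper, not part of the original Makefile:) to double-check that
# the pip-installed 'typing' is the one actually being picked up, a target along
# these lines could be used:
#
# check-typing:
# 	@python3 -c 'import typing; print(typing.__file__)'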
all: flake8 pylint mypy coverage
flake8:
	@echo Performing syntax checks via flake8...
	@flake8 ${PY_SRC} || exit 1

pylint:
	@echo Performing static analysis via pylint...
	@pylint --disable=I --rcfile=pylint.cfg ${PY_SRC} | grep -v '^$$' | sed -n '/^Report/q;p'

mypy:
	@echo Performing type analysis via mypy...
	@mypy --disallow-untyped-defs --check-untyped-defs ${PY_SRC} || exit 1

coverage:
	@echo Performing coverage checks...
	@$(MAKE) -C tests-coverage || exit 1
.PHONY: flake8 pylint mypy coverage
[![Build and Test Status of Data Modelling Tools on Circle CI](https://circleci.com/gh/ttsiodras/DataModellingTools.svg?&style=shield&circle-token=9df10d36b6b4ccd923415a5890155b7bf54b95c5)](https://circleci.com/gh/ttsiodras/DataModellingTools/tree/master)

TASTE Data Modelling Tools
==========================
These are the tools used by the European Space Agency's [TASTE toolchain](https://taste.tuxfamily.org/)
to automate the handling of data modelling. They include more than two
dozen code generators that automatically create the 'glue': the run-time translation
bridges that allow code generated by modelling tools (Simulink, SCADE, OpenGeode, etc.)
to "speak" to one another, via ASN.1 marshalling.
For the encoders and decoders of the messages
themselves, TASTE uses [ASN1SCC](https://github.com/ttsiodras/asn1scc), an ASN.1
compiler specifically engineered for safety-critical environments.
For more details, visit the [TASTE site](https://taste.tuxfamily.org/).
Installation
------------
To use the tools, this should suffice:

    $ sudo apt-get install libxslt1-dev libxml2-dev zlib1g-dev python3-pip
    $ pip3 install --user --upgrade .
For developing the tools, the packaged Makefile allows for easy static analysis
via the dominant Python static analyzers and syntax checkers:

    $ make flake8  # check for PEP 8 compliance
    $ make pylint  # static analysis with pylint
    $ make mypy    # type analysis with mypy
Contents
--------
What is packaged:
- **commonPy** (*library*)
Contains the basic API for parsing ASN.1 (via invocation of
[ASN1SCC](https://github.com/ttsiodras/asn1scc) and simplification
of the generated XML AST representation to the Python classes
inside `asnAST.py`). The class diagram with the AST classes
is [packaged in the code](dmt/commonPy/asnAST.py#L42). A short
usage sketch follows this list.
- **asn2aadlPlus** (*utility*)
Converts the type declarations inside ASN.1 grammars to AADL
declarations, which are used by [Ocarina](https://github.com/OpenAADL/ocarina)
to generate the executable containers.
- **asn2dataModel** (*utility*)
Reads the ASN.1 specification of the exchanged messages, and generates
the semantically equivalent Modeling tool/Modeling language declarations
(e.g. SCADE/Lustre, Matlab/Simulink, etc.).
The actual mapping logic exists in plugins, called *A mappers*
(`simulink_A_mapper.py` handles Simulink/RTW, `scade6_A_mapper.py`
handles SCADE6, `ada_A_mapper.py` generates Ada types,
`sqlalchemy_A_mapper.py` generates SQL definitions via SQLAlchemy, etc.).
- **aadl2glueC** (*utility*)
@@ -29,23 +65,22 @@ TASTE Data Modelling Technologies
Reads the AADL specification of the system, and then generates the runtime
bridge-code that will map the message data structures from those generated
by [ASN1SCC](https://github.com/ttsiodras/asn1scc) to/from those generated
by the modeling tool that is used to functionally model the subsystem
(e.g. SCADE, ObjectGeode, Matlab/Simulink, C, Ada, etc.).
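
To give a flavour of the `commonPy` parsing API and of the per-type walk that the
A and B mappers perform, here is a minimal sketch. It assumes that ASN1SCC is
available in the PATH, a hypothetical grammar file `dataview.asn`, and that the
import path and the `g_names` global are as used inside this repository; consult
`asnParser.py` for the authoritative interface.

    from dmt.commonPy import asnParser, asnAST  # import path is an assumption

    # Invokes ASN1SCC under the hood and builds the Python AST of the grammar.
    asnParser.ParseAsnFileList(["dataview.asn"])

    # g_names maps each ASN.1 type name to its AST node (see asnAST.py).
    for typename, node in asnParser.g_names.items():
        if isinstance(node, asnAST.AsnSequence):
            fields = ", ".join(child[0] for child in node._members)
            print(typename, "is a SEQUENCE with fields:", fields)
        elif isinstance(node, asnAST.AsnInt):
            print(typename, "is an INTEGER")
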
Contact
-------
For bug reports, please use the Issue Tracker; for any other communication,
contact me at:
Thanassis Tsiodras
Real-time Embedded Software Engineer
System, Software and Technology Department
European Space Agency
ESTEC
Keplerlaan 1, PO Box 299
NL-2200 AG Noordwijk, The Netherlands
Athanasios.Tsiodras@esa.int | www.esa.int
T +31 71 565 5332
#
# (C) Semantix Information Technologies.
#
# Semantix Information Technologies is licensing the code of the
# Data Modelling Tools (DMT) in the following dual-license mode:
#
# Commercial Developer License:
# The DMT Commercial Developer License is the suggested version
# to use for the development of proprietary and/or commercial software.
# This version is for developers/companies who do not want to comply
# with the terms of the GNU Lesser General Public License version 2.1.
#
# GNU LGPL v. 2.1:
# This version of DMT is the one to use for the development of
# applications, when you are willing to comply with the terms of the
# GNU Lesser General Public License version 2.1.
#
# Note that in both cases, there are no charges (royalties) for the
# generated code.
#
'''
This is the implementation of the code mapper for Ada code.
As initially envisioned, ASSERT technology is not supposed
to support manually-made systems. A migration path, however,
that allows legacy hand-written code and modelling-tool
generated code to co-exist can be beneficial in allowing
for a smooth transition. To that end, this backend (as well as
the C one) was written.
This is a backend for Semantix's code generator B (aadl2glueC).
Ada is a member of the asynchronous "club" (SDL, etc.);
the subsystem developer (or rather, the APLC developer) uses
native Ada code to work with the code generated by modelling tools.
To that end, this backend creates "glue" functions for input and
output parameters, which are callable from Ada.
'''
# from commonPy.utility import panic
# from recursiveMapper import RecursiveMapper
# from asynchronousTool import ASynchronousToolGlueGenerator
import c_B_mapper
isAsynchronous = True
adaBackend = None
cBackend = None
def Version():
print "Code generator: " + "$Id: ada_B_mapper.py 2382 2012-06-22 08:35:33Z ttsiodras $" # pragma: no cover
# All the ada B mapper is now Obsolete, we are using ASN1SCC for Dumpables
#
# class FromDumpableCtoASN1SCC(RecursiveMapper):
# def __init__(self):
# self.uniqueID = 0
# def UniqueID(self):
# self.uniqueID += 1
# return self.uniqueID
# def DecreaseUniqueID(self):
# self.uniqueID -= 1
# def MapInteger(self, srcCVariable, destVar, _, __, ___):
# return ["%s = %s;\n" % (destVar, srcCVariable)]
# def MapReal(self, srcCVariable, destVar, _, __, ___):
# return ["%s = %s;\n" % (destVar, srcCVariable)]
# def MapBoolean(self, srcCVariable, destVar, _, __, ___):
# return ["%s = %s;\n" % (destVar, srcCVariable)]
# def MapOctetString(self, srcCVariable, destVar, _, __, ___):
# lines = []
# lines.append("{\n")
# lines.append(" int i;\n")
# lines.append(" for(i=0; i<%s.length; i++)\n" % srcCVariable)
# lines.append(" %s.arr[i] = %s.content[i];\n" % (destVar, srcCVariable))
# lines.append(" %s.nCount = %s.length;\n" % (destVar, srcCVariable))
# lines.append("}\n")
# return lines
# def MapEnumerated(self, srcCVariable, destVar, _, __, ___):
# return ["%s = %s;\n" % (destVar, srcCVariable)]
# def MapSequence(self, srcCVariable, destVar, node, leafTypeDict, names):
# lines = []
# for child in node._members:
# lines.extend(
# self.Map(
# "%s.%s" % (srcCVariable, self.CleanName(child[0])),
# destVar + "." + self.CleanName(child[0]),
# child[1],
# leafTypeDict,
# names))
# return lines
# def MapSet(self, srcCVariable, destVar, node, leafTypeDict, names):
# return self.MapSequence(srcCVariable, destVar, node, leafTypeDict, names)
# def MapChoice(self, srcCVariable, destVar, node, leafTypeDict, names):
# lines = []
# childNo = 0
# for child in node._members:
# childNo += 1
# lines.append("%sif (%s.choiceIdx == %d) {\n" %
# (self.maybeElse(childNo), srcCVariable, childNo))
# lines.extend([' '+x for x in self.Map(
# "%s.u.%s" % (srcCVariable, self.CleanName(child[0])),
# destVar + ".u." + self.CleanName(child[0]),
# child[1],
# leafTypeDict,
# names)])
# lines.append(" %s.kind = %s_PRESENT;\n" % (destVar, self.CleanName(child[0])))
# lines.append("}\n")
# return lines
# def MapSequenceOf(self, srcCVariable, destVar, node, leafTypeDict, names):
# lines = []
# lines.append("{\n")
# uniqueId = self.UniqueID()
# lines.append(" int i%s;\n" % uniqueId)
# lines.append(" for(i%s=0; i%s<%s.length; i%s++) {\n" % (uniqueId, uniqueId, srcCVariable, uniqueId))
# lines.extend([" " + x for x in self.Map(
# "%s.content[i%s]" % (srcCVariable, uniqueId),
# "%s.arr[i%s]" % (destVar, uniqueId),
# node._containedType,
# leafTypeDict,
# names)])
# lines.append(" }\n")
# lines.append(" %s.nCount = %s.length;\n" % (destVar, srcCVariable))
# lines.append("}\n")
# self.DecreaseUniqueID()
# return lines
# def MapSetOf(self, srcCVariable, destVar, node, leafTypeDict, names):
# return self.MapSequenceOf(srcCVariable, destVar, node, leafTypeDict, names)
#
# class FromASN1SCCtoDumpableC(RecursiveMapper):
# def __init__(self):
# self.uniqueID = 0
# def UniqueID(self):
# self.uniqueID += 1
# return self.uniqueID
# def DecreaseUniqueID(self):
# self.uniqueID -= 1
# def MapInteger(self, srcCVariable, destVar, _, __, ___):
# return ["%s = %s;\n" % (destVar, srcCVariable)]
# def MapReal(self, srcCVariable, destVar, _, __, ___):
# return ["%s = %s;\n" % (destVar, srcCVariable)]
# def MapBoolean(self, srcCVariable, destVar, _, __, ___):
# return ["%s = %s;\n" % (destVar, srcCVariable)]
# def MapOctetString(self, srcCVariable, destVar, _, __, ___):
# lines = []
# lines.append("{\n")
# lines.append(" int i;\n")
# lines.append(" for(i=0; i<%s.nCount; i++)\n" % srcCVariable)
# lines.append(" %s.content[i] = %s.arr[i];\n" % (destVar, srcCVariable))
# lines.append(" %s.length = %s.nCount;\n" % (destVar, srcCVariable))
# lines.append("}\n")
# return lines
# def MapEnumerated(self, srcCVariable, destVar, _, __, ___):
# return ["%s = %s;\n" % (destVar, srcCVariable)]
# def MapSequence(self, srcCVariable, destVar, node, leafTypeDict, names):
# lines = []
# for child in node._members:
# lines.extend(
# self.Map(
# "%s.%s" % (srcCVariable, self.CleanName(child[0])),
# destVar + "." + self.CleanName(child[0]),
# child[1],
# leafTypeDict,
# names))
# return lines
# def MapSet(self, srcCVariable, destVar, node, leafTypeDict, names):
# return self.MapSequence(srcCVariable, destVar, node, leafTypeDict, names)
# def MapChoice(self, srcCVariable, destVar, node, leafTypeDict, names):
# lines = []
# childNo = 0
# for child in node._members:
# childNo += 1
# lines.append("%sif (%s.kind == %s_PRESENT) {\n" %
# (self.maybeElse(childNo), srcCVariable, self.CleanName(child[0])))
# lines.extend([' '+x for x in self.Map(
# "%s.u.%s" % (srcCVariable, self.CleanName(child[0])),
# destVar + ".u." + self.CleanName(child[0]),
# child[1],
# leafTypeDict,
# names)])
# lines.append(" %s.choiceIdx = %d;\n" % (destVar, childNo))
# lines.append("}\n")
# return lines
# def MapSequenceOf(self, srcCVariable, destVar, node, leafTypeDict, names):
# lines = []
# lines.append("{\n")
# uniqueId = self.UniqueID()
# lines.append(" int i%s;\n" % uniqueId)
# lines.append(" for(i%s=0; i%s<%s.nCount; i%s++) {\n" % (uniqueId, uniqueId, srcCVariable, uniqueId))
# lines.extend([" " + x for x in self.Map(
# "%s.arr[i%s]" % (srcCVariable, uniqueId),
# "%s.content[i%s]" % (destVar, uniqueId),
# node._containedType,
# leafTypeDict,
# names)])
# lines.append(" }\n")
# lines.append(" %s.length = %s.nCount;\n" % (destVar, srcCVariable))
# lines.append("}\n")
# self.DecreaseUniqueID()
# return lines
# def MapSetOf(self, srcCVariable, destVar, node, leafTypeDict, names):
# return self.MapSequenceOf(srcCVariable, destVar, node, leafTypeDict, names)
#
# class Ada_GlueGenerator(ASynchronousToolGlueGenerator):
# def __init__(self):
# ASynchronousToolGlueGenerator.__init__(self)
# self.FromDumpableCtoASN1SCC = FromDumpableCtoASN1SCC()
# self.FromASN1SCCtoDumpableC = FromASN1SCCtoDumpableC()
# self.Ada_HeaderFile = None
# self.Ada_SourceFile = None
# self.definedTypes = {}
# def Version(self):
# print "Code generator: " + "$Id: ada_B_mapper.py 2382 2012-06-22 08:35:33Z ttsiodras $"
# def HeadersOnStartup(self, unused_asnFile, unused_outputDir, unused_maybeFVname):
# if self.useOSS:
# self.C_HeaderFile.write("#include \"%s.oss.h\" // OSS generated\n\n" % self.asn_name)
# self.C_SourceFile.write("\nextern OssGlobal *g_world;\n\n")
# self.C_HeaderFile.write("#include \"%s.h\" // Space certified compiler generated\n\n" % self.asn_name)
# self.C_HeaderFile.write("#include \"DumpableTypes.h\"\n\n")
# def Encoder(self, nodeTypename, node, leafTypeDict, names, encoding):
# if encoding.lower() not in self.supportedEncodings:
# panic(str(self.__class__) + ": in (%s), encoding can be one of %s (not '%s')" %
# (nodeTypename, self.supportedEncodings, encoding))
#
# # Definition of the standard encoding function (same interface as the C mapper )
# cBackend.Encoder(nodeTypename, node, leafTypeDict, names, encoding)
# # End standard encoding function
#
# # in order not to duplicate conversion functions, skip the rest if encoding is native
# if encoding.lower() == "native":
# return
#
# if not self.definedTypes.has_key(nodeTypename):
# self.definedTypes[nodeTypename] = 1
# # Declare/define the C stub variable (one per ASN.1 type)
# self.C_HeaderFile.write("\n/* --- Staging var for %s --- */\n" % (nodeTypename))
#
# tmpTypeName = "asn1Scc%s" % self.CleanNameAsToolWants(nodeTypename)
# tmpVarName = "asn1scc"
# tmpSpName = "Ada_to_SCC_%s" % \
# self.CleanNameAsToolWants(nodeTypename)
#
# self.C_HeaderFile.write(
# "void %s(GT__%s *ada, %s *%s);\n" %
# (tmpSpName,
# self.CleanNameAsToolWants(nodeTypename),
# tmpTypeName,
# tmpVarName))
# self.C_SourceFile.write(
# "void %s(GT__%s *ada, %s *%s)\n{\n" %
# (tmpSpName,
# self.CleanNameAsToolWants(nodeTypename),
# tmpTypeName,
# tmpVarName))
#
# lines = self.FromDumpableCtoASN1SCC.Map(
# "(*ada)",
# "(*asn1scc)",
# node,
# leafTypeDict,
# names)
# lines = [" "+x for x in lines]
#
# self.C_SourceFile.write("".join(lines))
# self.C_SourceFile.write("}\n\n")
#
# def Decoder(self, nodeTypename, node, leafTypeDict, names, encoding):
# if encoding.lower() not in self.supportedEncodings:
# panic(str(self.__class__) + ": in (%s), encoding can be one of %s (not '%s')" %
# (nodeTypename, self.supportedEncodings, encoding))
#
# # Definition of the standard decoding function (same interface as the C mapper )
# cBackend.Decoder(nodeTypename, node, leafTypeDict, names, encoding)
# # End standard decoding function
#
# if encoding.lower() == "native":
# return
#
# tmpTypeName = "asn1Scc%s" % self.CleanNameAsToolWants(nodeTypename)
# tmpVarName = "asn1scc"
# tmpSpName = "SCC_to_Ada_%s" % self.CleanNameAsToolWants(nodeTypename)
#
# # Create C function that does the encoding
# self.C_HeaderFile.write(
# "void %s(%s *%s, GT__%s *ada);\n" %
# (tmpSpName,
# tmpTypeName,
# tmpVarName,
# self.CleanNameAsToolWants(nodeTypename)))
# self.C_SourceFile.write(
# "void %s(%s *%s, GT__%s *ada)\n{\n" %
# (tmpSpName,
# tmpTypeName,
# tmpVarName,
# self.CleanNameAsToolWants(nodeTypename)))
#
# lines = self.FromASN1SCCtoDumpableC.Map(
# "(*asn1scc)",
# "(*ada)",
# node,
# leafTypeDict,
# names)
# lines = [" "+x for x in lines]
#
# self.C_SourceFile.write("".join(lines))
# self.C_SourceFile.write("}\n\n")
#
# def OnShutdown(self, modelingLanguage, asnFile, maybeFVname):
# ASynchronousToolGlueGenerator.OnShutdown(self, modelingLanguage, asnFile, maybeFVname)
def OnStartup(unused_modelingLanguage, asnFile, outputDir, maybeFVname, useOSS):
    global cBackend
    # 2009-02-10: Since we now use ASN1SCC structures as dumpables (even for Ada)
    # we no longer need these Ada-specific Dumpable structures.
    # global adaBackend
    # adaBackend = Ada_GlueGenerator()
    cBackend = c_B_mapper.C_GlueGenerator()
    # adaBackend.OnStartup(modelingLanguage, asnFile, outputDir, maybeFVname, useOSS)
    cBackend.OnStartup("C", asnFile, outputDir, maybeFVname, useOSS)
def OnBasic(nodeTypename, node, leafTypeDict, names):
    cBackend.OnBasic(nodeTypename, node, leafTypeDict, names)

def OnSequence(nodeTypename, node, leafTypeDict, names):
    cBackend.OnSequence(nodeTypename, node, leafTypeDict, names)

def OnSet(nodeTypename, node, leafTypeDict, names):
    cBackend.OnSet(nodeTypename, node, leafTypeDict, names)  # pragma: nocover

def OnEnumerated(nodeTypename, node, leafTypeDict, names):
    cBackend.OnEnumerated(nodeTypename, node, leafTypeDict, names)

def OnSequenceOf(nodeTypename, node, leafTypeDict, names):
    cBackend.OnSequenceOf(nodeTypename, node, leafTypeDict, names)

def OnSetOf(nodeTypename, node, leafTypeDict, names):
    cBackend.OnSetOf(nodeTypename, node, leafTypeDict, names)  # pragma: nocover

def OnChoice(nodeTypename, node, leafTypeDict, names):
    cBackend.OnChoice(nodeTypename, node, leafTypeDict, names)

def OnShutdown(unused_modelingLanguage, asnFile, maybeFVname):
    cBackend.OnShutdown("C", asnFile, maybeFVname)
../commonPy/recursiveMapper.py
\ No newline at end of file
og_B_mapper.py
\ No newline at end of file
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
asn2aadlPlus converts ASN.1 modules to AADL (v1 or v2) for use in TASTE
"""
from asn2aadlPlus import main
__version__ = 1.0
../commonPy/createInternalTypes.py
\ No newline at end of file
machine:
  post:
    - pyenv global 3.4.4 system

dependencies:
  cache_directories:
    - "~/.apt-cache"
  pre:
    - sudo rm -rf /var/cache/apt/archives && sudo ln -s ~/.apt-cache /var/cache/apt/archives && mkdir -p ~/.apt-cache/partial
    - sudo apt-get update
    - sudo apt-get install libxslt-dev libxml2-dev mono-complete
    - wget -O - -q https://github.com/ttsiodras/asn1scc/releases/download/3.2.81/asn1scc-bin-3.2.81.tar.gz | tar zxvf -
    - wget -O - -q https://github.com/ttsiodras/DataModellingTools/files/335591/antlr-2.7.7.tar.gz | tar zxvf - ; cd antlr-2.7.7/lib/python ; pip2 install .
  override:
    - pip3 install -r requirements.txt

test:
  override:
    - PATH=$PATH:$(pwd)/asn1scc make
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
This module contains the shared API for parsing ASN.1
"""
from . import configMT
from . import asnParser
from . import asnAST
from . import utility
from . import createInternalTypes
from . import verify
from . import recursiveMapper
from . import cleanupNodes
__version__ = "1.2.5"
@@ -21,7 +21,7 @@
import os
import re
import copy
import DV_Types  # pylint: disable=import-error
from ctypes import (
cdll, c_void_p, c_ubyte, c_double, c_uint,
c_longlong, c_bool, c_int, c_long
@@ -102,7 +102,7 @@ class DataStream(object):
# print "Reading",
msg = ""
pData = c_void_p(GetBitstreamBuffer(self._bs))
for i in xrange(0, GetStreamCurrentLength(self._bs)):
for i in range(0, GetStreamCurrentLength(self._bs)):
b = GetBufferByte(pData, i)
msg += chr(b)
# print b, ",",
@@ -115,7 +115,7 @@ class DataStream(object):
        self._bs.count = strLength