Author: iafwizdoe2ej

  • otel-demo

    OpenTelemetry and Dynatrace

    Microservice-based demo project showcasing Dynatrace’s tracing functionality in combination with OpenTelemetry.

    Architecture

    The Showcase

    The application itself serves no specific purpose and does not offer a polished UI. It provides a single endpoint for users to sign up with email, name and password, and sends out a confirmation mail afterwards. All service communication is done via HTTP.

    While there are some surrounding services to make this a more representative example, the main components are the following:

    1. Backend Service
    2. Mail Service
    3. Template Service

    Backend Service and Template Service are going to be monitored via the OneAgent and will create some custom OpenTelemetry spans via manual instrumentation. The service in the middle – the Mail Service – is going to be instrumented with OpenTelemetry only.

    The signup procedure can be described in six simple steps:

    1. The signup endpoint is called with an HTTP POST request containing email, name and password in the body
    2. After the email address is validated, the user is stored in the MongoDB database
    3. The Backend Service calls the Mail Service’s send endpoint via HTTP POST to send a signup confirmation mail
    4. The Mail Service calls the Template Service via gRPC to render the email body
    5. After rendering the email, the Template Service stores the result in the Redis cache and returns
    6. Finally, the Mail Service calls an external mail-as-a-service provider (e.g. SendGrid) to send the email

    Run the Demo

    There are two ways to run this demo:

    • Docker Compose – OneAgent will be deployed as Docker container
    • Kubernetes – OneAgent deployment via Dynatrace Operator

    WIP Get Hands-on Experience

    If you want to get some hands-on experience instrumenting Node.js apps and using Dynatrace, check out the tutorial branch.

    Having problems or facing issues?

    Reach out to me via email: martin.nirtl@dynatrace.com

    Visit original content creator repository https://github.com/martinnirtl/otel-demo
  • ses-azure

    ses-azure

    Scripts to build and manage a SUSE Enterprise Storage cluster on Azure

    How to Use the Scripts

    These are bash scripts, so obviously they need Linux or a bash shell on whatever platform you’re using. The script names should be self-explanatory. They also require the Azure CLI to be installed, and a file named variables.sh in the same directory.

    Specific Scripts

    Name Notes
    build_node.sh This is just a copy of https://github.com/dmbyte/SES-scripts/blob/master/clusterbuilder/buildit.sh.
    create_cluster.sh Create the cluster by running create_nics.sh, create_vms.sh, and create_disks.sh.
    create_disks.sh Create and attach disks to all the OSD nodes. You need to specify how many OSD nodes and how many disks per node.
    create_nics.sh Create vNICs with accelerated networking for admin, test, and OSD VMs. You need to specify how many of each VM type.
    create_test_vms.sh Create VMs to be used as test clients for the cluster. You need to specify how many VMs to create.
    create_vms.sh Create admin, test, and OSD VMs. You need to specify how many of each VM type to create.
    delete_disks.sh Delete all disks in the specified resource group.
    delete_vms.sh Delete all VMs in the specified resource group.
    detach_disks.sh Detach disks from all the OSD nodes. You need to specify how many OSD nodes and how many disks per node.
    reattach_disks.sh Attach existing disks to all the OSD nodes. You need to specify how many OSD nodes and how many disks per node.
    resize_vms.sh Resize the OSD VMs. You need to specify how many OSD nodes.
    setup node Not an actual script, but contains snippets that can be used to register and deploy code on the nodes once created.
    shutdown_vms.sh Shut down all VMs in the specified resource group.
    startup_vms.sh Start up all VMs in the specified resource group.
    variables.sh See below for required contents.
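To illustrate how the node/disk counts fan out, here is a sketch of the kind of loop a create_disks.sh-style script runs, one `az vm disk attach --new` call per (node, disk) pair. The resource names, disk size, and flags below are illustrative assumptions, not taken from the actual scripts:

```python
def disk_create_commands(prefix: str, osd_nodes: int, disks_per_node: int,
                         resource_group: str, size_gb: int = 1024) -> list:
    """Return one `az vm disk attach --new` command per (node, disk) pair."""
    cmds = []
    for n in range(1, osd_nodes + 1):
        for d in range(1, disks_per_node + 1):
            cmds.append(
                f"az vm disk attach --new -g {resource_group} "
                f"--vm-name {prefix}-osd-{n} "
                f"--name {prefix}-osd-{n}-disk-{d} --size-gb {size_gb}"
            )
    return cmds

# 2 OSD nodes x 3 disks per node -> 6 attach commands
cmds = disk_create_commands("ses", osd_nodes=2, disks_per_node=3,
                            resource_group="ses-resource-group")
```

The same node-count parameter drives detach_disks.sh and reattach_disks.sh, which is why the counts must match what was originally created.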

    Variables.sh Content

    #!/bin/sh
    
    # Prefix used to name all resources (referenced by the variables below)
    export PREFIX=
    
    # Resource group for the cluster
    export RESOURCE_GROUP=$PREFIX-resource-group
    
    # Availability set for the cluster
    export AVAILABILITY_SET=$PREFIX-availability-set
    
    # Username to log in to the nodes
    export USERNAME=
    
    # Password to log in to the nodes
    export PASSWORD=
    
    # TODO: Use SSH key to log in
    
    # Network security group to use for all nodes
    export NSG=$PREFIX-nsg
    
    # Location for the nodes
    export LOCATION=
    
    # Name of VNet
    export VNET=$PREFIX-vnet
    
    # Name of Subnet
    export SUBNET=$PREFIX-subnet
    
    # URN of Image
    export IMAGE=
    

    Notes on accessing Ceph Dashboard

    The OSD nodes are built without public internet IP addresses, so they will need to be accessed via port forwarding on the admin node.
    Using SSH, you can use a command like this:

    ssh -L 8443:<prefix>-osd-<n>:8443 -L 3000:<prefix>-admin:3000 sesadmin@<prefix>-admin-public-ip.eastus.cloudapp.azure.com
    

    And then hit https://localhost:8443/ to access the dashboard. You will also need an entry in your /etc/hosts file to map the
    admin node’s public IP address to the internal name of the admin host, like so:

    <ip address>	<prefix>-admin.internal.cloudapp.net
    

    Visit original content creator repository
    https://github.com/nbornstein/ses-azure

  • PPGetAddressBook

    image

    • PPGetAddressBook provides corresponding wrappers around the AddressBook framework (before iOS 9) and the Contacts framework (iOS 9 and later);

    • Get contacts sorted A~Z by the pinyin initial of the contact’s name with one line of code (note: sorting handles all characters of the name, so the ordering is more accurate!);

    • Get contacts in their original order, ungrouped, with one line of code, to process as you like;

    • Phone numbers are cleaned of “+86”, “-”, “()” and blanks, and empty numbers and blank contact names are handled, so a NULL data source will not crash the app;

    • Polyphonic surname characters such as “长”, “沈”, “厦”, “地”, “冲” are handled correctly.

    New PP-iOS learning and discussion QQ group: 323408051 — questions about the PP series wrappers and iOS development can be discussed there

    Jianshu article address; codeData address

    If you need the Swift version, see: https://github.com/jkpang/PPGetAddressBookSwift

    image

    Requirements

    • iOS 7+
    • Xcode 8+

    Installation

    1. Manual installation:

    Download the demo, drag the subfolder PPGetAddressBook into your project, and import the header PPGetAddressBook.h to start using it

    2. CocoaPods installation:

    First: pod 'PPGetAddressBook', :git => 'https://github.com/jkpang/PPGetAddressBook.git'

    Then: pod install or pod install --no-repo-update

    If pod search PPGetAddressBook does not show the latest version, run pod setup in the terminal to update the local spec cache (this may take a while), then search again

    Usage

    *Note: on iOS 10 you must declare the privacy permission for accessing user data in the info.plist file: see “Compatible with iOS 10: declaring privacy data permissions”
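For reference, the required info.plist entry is the NSContactsUsageDescription key; the description string below is illustrative and is shown to the user in the system permission prompt:

```xml
<!-- info.plist: required on iOS 10+ before accessing contacts -->
<key>NSContactsUsageDescription</key>
<string>This app uses your contacts to display and manage your address book.</string>
```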

    1. First, you must request the user’s authorization for the app to access the contacts (it is recommended to call this in the didFinishLaunchingWithOptions method of AppDelegate.m)

         //Request authorization to access the contacts
        [PPGetAddressBook requestAddressBookAuthorization];

    2. Fetching the contacts

    1. Get contacts sorted A~Z by the pinyin initial of the name (sorting handles all characters of the name), in one line!

        //Get contacts sorted A~Z by the pinyin initial of the name (the second character of the name is also handled)
        [PPGetAddressBook getOrderAddressBook:^(NSDictionary<NSString *,NSArray *> *addressBookDict, NSArray *nameKeys) {
            //addressBookDict: dictionary holding all contacts
            //nameKeys: array of pinyin letters A~Z
            //reload the tableView
            [self.tableView reloadData];
        } authorizationFailure:^{
            NSLog(@"Please allow PPAddressBook to access your contacts under Settings - Privacy - Contacts on your iPhone");
        }];
    
       

    2. Get the contact models in their original order, ungrouped, in one line!

        //Get the contact models without any sorting
        [PPGetAddressBook getOriginalAddressBook:^(NSArray<PPPersonModel *> *addressBookArray) {
           //addressBookArray: array of contact models in original order
           
           //reload the tableView
            [self.tableView reloadData];
        } authorizationFailure:^{
           NSLog(@"Please allow PPAddressBook to access your contacts under Settings - Privacy - Contacts on your iPhone");
        }];
        

    If you have a better implementation, please don’t hesitate to share it!

    Your star is what keeps me updating!

    ===

    CocoaPods Changelog

    • 2016.12.01 (tag:0.2.8) – Fixed a bug where contact edits were not synced promptly on systems before iOS 9
    • 2016.10.30 (tag:0.2.7) – 1. Handled the polyphonic surname characters “长”, “沈”, “厦”, “地”, “冲”; 2. Moved the ‘#’ key to the end of A~Z!
    • 2016.10.08 (tag:0.2.6) – Contact reading speed improved again!
    • 2016.09.16 (tag:0.2.5) – 3~6x performance improvement when reading sorted contacts, plus some code optimization; this version or later is recommended
    • 2016.09.12 (tag:0.2.2) – Minor optimizations
    • 2016.09.01 (tag:0.2.1) – Fixed a bug where the app froze when the user had not granted authorization
    • 2016.08.26 (tag:0.2.0) – Moved the time-consuming contact sorting to a background thread, greatly improving load speed and user experience
    • 2016.08.23 (tag:0.1.2) – Minor optimizations
    • 2016.08.21 (tag:0.1.1) – Initial Pods release

    My Apps

    • PPHub: A simple and beautiful GitHub iOS client
      App_Store

    Contact:

    QR code of the PP-iOS learning and discussion group

    License

    PPGetAddressBook is released under the MIT license. See the LICENSE file for details.

    Visit original content creator repository https://github.com/jkpang/PPGetAddressBook
  • SiberianIngrianFinnish

    The Siberian Ingrian Finnish Language.

    This project is devoted to the Siberian Ingrian Finnish language. Siberian Ingrian Finnish is a language (dialect) used by the descendants of settlers who spoke Lower Luga Ingrian Finnish varieties and Lower Luga Ingrian (Izhorian), and who have been living in Omsk oblast (previously also in other regions of Siberia) for more than 200 years. The ancestors of the speakers of Siberian Ingrian Finnish came from the Lower Luga area, more precisely from the Rosona river area, in the early 19th century. This region is also called Estonian Ingria. Siberian Ingrian Finnish (Russian: Сибирский ингерманландский идиом) is a term introduced by D. V. Sidorkevich.

    References:

    1. Сидоркевич, Д. В. (Sidorkevich, Daria) (2014). Язык ингерманландских переселенцев в Сибири: структура, диалектные особенности, контактные явления (Doctoral dissertation, Ин-т лингвист. исслед. РАН (СПб)). https://iling.spb.ru/theses/1999 (In Russian)
    2. Sidorkevich, Daria (2011). On domains of adessive-allative in Siberian Ingrian Finnish. Acta Linguistica Petropolitana, 7(3). https://cyberleninka.ru/article/n/on-domains-of-adessive-allative-in-siberian-ingrian-finnish/viewer
    3. Kuznetsova, Natalia (2016). Evolution of the non-initial vocalic length contrast across the Finnic varieties of Ingria and adjacent areas. Linguistica Uralica, 52(1), 1-25. https://publicatt.unicatt.it/retrieve/handle/10807/143760/240945/ling-2016-1-1-25%28uus%29.pdf The part of this paper about Siberian Ingrian Finnish (mixed Siberian Ingrian/Finnish dialect).
    4. Ubaleht, Ivan (2020). The Creation of Siberian Ingrian Finnish and Siberian Tatar Speech Corpora. Workshop on RESOURCEs and representations For Under-resourced Languages and domains (RESOURCEFUL-2020) at SLTC, Gothenburg, Sweden, 25th November 2020. https://gu-clasp.github.io/resourceful-2020/papers/RESOURCEFUL-2020_paper_5.pdf
    5. Ubaleht, I. (2021, March). Lexeme: the Concept of System and the Creation of Speech Corpora for Two Endangered Languages. In Proceedings of the Workshop on Computational Methods for Endangered Languages (Vol. 2, pp. 20-23).
      https://journals.colorado.edu/index.php/computel/article/view/981
      https://computel-workshop.org/wp-content/uploads/2021/03/2021.computel-2.5.pdf
    6. Ivan Ubaleht and Taisto-Kalevi Raudalainen. 2022. Development of the Siberian Ingrian Finnish Speech Corpus. In Proceedings of the Fifth Workshop on the Use of Computational Methods in the Study of Endangered Languages, pages 1–4, Dublin, Ireland. Association for Computational Linguistics.
      DOI: 10.18653/v1/2022.computel-1.1
      https://aclanthology.org/2022.computel-1.1.pdf
    7. Злобина, Виено (Zlobina, Vieno) (1971). “Кто такие корлаки?” [Who are Korlaks?]. Советское финно-угроведение, 2, pp. 87–91. (In Russian)
    8. Zlobina, Vieno (1972). “Mitä alkujuurta Siperian suomalaiset ja korlakat ovat”. Kotiseutu, 2 (3), pp. 86–92. (In Finnish)
    9. Nirvi, Ruben (1972). “Siperian inkeriläisten murteesta ja alkuperästä”. Kotiseutu, 2 (3), pp. 92–95. (In Finnish)
    10. Wikipedia article about Siberian Ingrian Finnish https://en.wikipedia.org/wiki/Siberian_Ingrian_Finnish

    Speech data of Siberian Ingrian Finnish

    You can download the primary speech data for the Siberian Ingrian Finnish corpus here:
    https://drive.google.com/drive/folders/1csw-_n2TzQa_AQObGBJP8x-S8ZH_h9E9

    Stay tuned for more updates…

    Video data of Siberian Ingrian Finnish

    Speaker JuMS-28: https://www.youtube.com/watch?v=YqwrK6sItHI

    License

    All data of Siberian Ingrian Finnish in this repository are licensed under the CC BY 4.0: https://creativecommons.org/licenses/by/4.0/

    If you use these materials, please cite the paper:
    Ivan Ubaleht and Taisto-Kalevi Raudalainen. 2022. Development of the Siberian Ingrian Finnish Speech Corpus. In Proceedings of the Fifth Workshop on the Use of Computational Methods in the Study of Endangered Languages, pages 1–4, Dublin, Ireland. Association for Computational Linguistics.
    DOI: 10.18653/v1/2022.computel-1.1

    or cite the corpus repository: https://github.com/ubaleht/SiberianIngrianFinnish

    Software

    Description of the Speakers

    Speaker Code (Gender) Year of Birth Current Place of Residence Place of Birth Birthplace of Parents Speech Data (Duration)
    AAK-47 (M) 1947 Ryzhkovo Syade mother: Ryzhkovo, father: no data 40 min 57 s
    IAI-33 (F) 1933 Oglukhino Ryzhkovo both parents: Ryzhkovo 33 min 14 s
    JuMS-28 (M) 1928 Ryzhkovo Ryzhkovo both parents: Ryzhkovo 77 min 53 s
    KKM-34 (M) 1934 Ryzhkovo Ryzhkovo both parents: Ryzhkovo 31 min 29 s
    MAP-49 (F) 1949 Ryzhkovo Ryzhkovo both parents: Ryzhkovo 30 min 36 s
    MMM-39 (M) 1939 Ryzhkovo Ryzhkovo both parents: Ryzhkovo 62 min 20 s
    PGM-56 (F) 1956 Omsk Finy both parents: Finy 8 min 20 s
    SVM-29 (M) 1929 Mikhailovka Larionovka both parents: Yamburgsky Uyezd, Saint Petersburg Governorate 10 min 36 s
    KZM-51 (F) 1951 Ryzhkovo Ryzhkovo mother: Ryzhkovo, father: no data 3 min 35 s


    Visit original content creator repository
    https://github.com/ubaleht/SiberianIngrianFinnish

  • {json:scada}

    JSON:SCADA Logo

    {json:scada}

    A portable and scalable SCADA/IIoT-I4.0 platform centered on the MongoDB database server.

    Mission Statement

    To provide an easy-to-use, fully featured, scalable, and portable SCADA/IIoT-I4.0 platform built by leveraging mainstream open-source IT tools.

    Screenshots

    screenshots

    Major features

    • Standard IT tools applied to SCADA/IoT (MongoDB, PostgreSQL/TimescaleDB, Node.js, C#, Golang, Grafana, etc.).
    • MongoDB as the real-time core database, persistence layer, config store, SOE historian.
    • Event-based realtime async data processing with MongoDB Change Streams.
    • Portability and modular interoperability over Linux, Windows, Mac OSX, x86/64, ARM.
    • Windows installer available in the releases section.
    • Unlimited tags, servers, and users.
    • Horizontal scalability, from a single computer to big clusters (MongoDB-sharding), Docker containers, VMs, Kubernetes, cloud, or hybrid deployments.
    • Modular distributed architecture. Lightweight redundant data acquisition nodes can connect securely over TLS to the database server. E.g. a Raspberry Pi can be a data acquisition node.
    • Extensibility of the core data model (MongoDB: NoSQL/schema-less).
    • HTML5 Web interface. UTF-8/I18N. Mobile access. Web-based configuration management.
    • Role-based access control (RBAC).
    • Various high-quality protocol drivers.
    • Integration with MQTT brokers (compatibility with Sparkplug B).
    • Live point configuration updates.
    • Inkscape-based SVG synoptic display editor.
    • PostgreSQL/TimescaleDB historian integrated with Grafana for easy creation of dashboards.
    • Easy development of custom applications with modern stacks like MEAN/MERN, etc. Extensive use of JSON from bottom up.
    • Leverage a huge ecosystem of MongoDB/PostgreSQL tools, community, services, etc.
    • Easy AI-helped custom app development using templates/API for tools like WindSurf/Cline/Cursor/Copilot/etc.

    Use cases

    • Protocol Gateway.
    • Secure Protocol Gateway with 1-way air gapped replication (via data diode or tap device).
    • Power/Oil/Gas/Manufacturing/etc Local Station HMI.
    • SCADA for Control Centers.
    • SCADA/IIoT Historian.
    • Intranet/Internet HTTPS Gateway – Visualization Server.
    • Multilevel Systems Integration (SCADA/IIoT/ERP/MES/PLC).
    • Global-Level/Cloud SCADA Systems Integration.
    • Edge processing.
    • Data concentrator for Big Data / ML processing.
    • Digital Transformation, Industry 4.0 enabler.

    Real-world usage

    • 5+ years of usage in 2 big control centers scanning data from 80+ substations, 90k tags.
    • 5+ years of usage as HMI for local operation of circa 40 substations up to 230kV level.

    Architecture

    architecture

    Documentation

    Protocols Roadmap

    • IEC 60870-5-104 Server TCP/TLS
    • IEC 60870-5-104 Client TCP/TLS
    • IEC 60870-5-101 Server Serial/TCP
    • IEC 60870-5-101 Client Serial/TCP
    • IEC 60870-5-103 Client
    • IEC 61850 MMS Client TCP/TLS
    • IEC 61850 MMS Server
    • IEC 61850 GOOSE/SV Client
    • DNP3 Client TCP/UDP/TLS/Serial – Windows x64 only!
    • DNP3 Server TCP/UDP/TLS/Serial
    • MQTT/Sparkplug-B Pub/Sub TCP/TLS
    • Modbus Client via PLC4X-GO
    • ICCP Client TCP/TLS
    • ICCP Server TCP/TLS
    • Telegraf Client (many data sources available such as MQTT, MODBUS, SNMP, …)
    • OPC UA Client TCP/Secure
    • OPC UA Server TCP/Secure
    • OPC UA Historical Data Server
    • OPC DA Client (Windows)
    • OPC AE Client (Windows)
    • OPC DA Server (Windows)
    • CIP Ethernet/IP (libplctag, experimental)
    • Siemens S7
    • BACNET
    • I104M (legacy adapter for some OSHMI drivers)
    • ONVIF Camera control and streaming

    Features Roadmap

    • Web-based Viewers
    • Web-based Configuration Manager
    • Excel-based Configuration
    • JWT Authentication
    • User auth/Role-based Access Control (RBAC)
    • LDAP/AD Authorization
    • Inkscape-based SVG Synoptic Editor
    • Compiled Cyclic Calculations Engine
    • Low-latency/Asynchronous Calculations Engine
    • Customizable Change-Stream Processor (for user implemented scripts)
    • Basic Alarms Processor
    • Advanced Alarms Processor
    • PostgreSQL/TimescaleDB Historian
    • Grafana Integration
    • Metabase Integration (via PostgreSQL/MongoDB connectors)
    • One-way realtime replication (over eth diode/tap device) w/ point db sync and historical backfill
    • Windows Installer
    • Online Demo
    • Docker Demo (docker-compose.yaml scripts)
    • Install Script for RedHat/Rocky 9.4 Linux x86-64 and arm64
    • Install Script for Ubuntu 24.04 Linux x86-64 and arm64
    • Linux Image / VM
    • Supervisor (Linux process manager) examples
    • Project IDX Configuration
    • InfluxDB Integration
    • Telegraf Integration
    • PowerBI Integration (via PostgreSQL connector)
    • PowerBI Direct Integration
    • Kafka/Redpanda/Benthos Integration
    • Eclipse 4diac
    • Supabase Integration
    • NodeRed Integration
    • n8n Integration
    • Alerta Integration (https://alerta.io/)
    • PLC4X-GO Integration (https://plc4x.apache.org/)
    • Example templates/API for fast AI-helped custom app developments
    • Managed Cloud Service
    • Supported LTS versions

    Spin up a free private instance on Google’s Firebase Studio

    With just a Google account, you can spin up a free private instance for test/dev on Google’s Firebase Studio. This is a great way to get started with the project. It builds the code from the GitHub repo and deploys it to a private Linux VM in the cloud, running protocols and providing a web UI for you to interact with. A web-based code editor is available for developing new apps and viewing or changing the code on the VM. You can also get help from Google’s Gemini AI for coding and other tasks. This is free, and there is no need to install any software on your local machine.

    See details here.

    Online Demo (substations simulation)

    This demo provides a public IEC 60870-5-104 server port on IP address 150.230.171.172:2404 (common address = 1) for testing.

    The demo data is published as regular MQTT topics to the public broker mqtt://test.mosquitto.org:1883 (about 8600 topics in JsonScadaDemoVPS/# and ACME_Utility/#).

    Data is also published as Sparkplug B to mqtt://test.mosquitto.org:1883 (about 4300 device metrics in spBv1.0/Sparkplug B Devices/+/JSON-SCADA Server/#). Data/birth messages are compressed by the Eclipse Tahu JavaScript libs.
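The Sparkplug B topic namespace follows the fixed form namespace/group_id/message_type/edge_node_id[/device_id], which is what the subscription above matches. A minimal sketch of splitting such topics (plain Python; the example device name is illustrative):

```python
from typing import Optional

def parse_sparkplug_topic(topic: str) -> Optional[dict]:
    """Split a Sparkplug B topic into its namespace components.

    Form: spBv1.0/<group_id>/<message_type>/<edge_node_id>[/<device_id>]
    Returns None for topics outside the spBv1.0 namespace.
    """
    parts = topic.split("/")
    if len(parts) not in (4, 5) or parts[0] != "spBv1.0":
        return None
    return {
        "group_id": parts[1],
        "message_type": parts[2],   # NBIRTH, NDATA, DBIRTH, DDATA, ...
        "edge_node_id": parts[3],
        "device_id": parts[4] if len(parts) == 5 else None,
    }

# Mirrors the demo's namespace; "Device1" is a hypothetical device ID.
t = parse_sparkplug_topic(
    "spBv1.0/Sparkplug B Devices/DDATA/JSON-SCADA Server/Device1")
```

Note that the actual payloads are protobuf-encoded Sparkplug B messages; this sketch only covers the topic structure.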

    Developer Contact

    Visit original content creator repository https://github.com/riclolsen/json-scada