tcVISION for Big Data


While tcVISION customers have been replicating to Big Data targets for several years, the initial implementations relied on tcVISION exits. tcVISION now provides Big Data as a standard output platform.



tcVISION’s support for Big Data as a target is fully integrated alongside traditional Linux/Unix/Windows (LUW) targets such as Oracle Database, IBM Db2 LUW, Software AG ADABAS LUW, IBM Informix, Sybase, Microsoft SQL Server, PostgreSQL and ODBC.



tcVISION can deliver replicated data to Big Data targets in a variety of ways: by creating files, by writing directly into Hadoop HDFS, or by streaming with Apache Kafka as the transport layer. Data can be packaged in standard JSON, Avro and CSV formats.
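
As an illustration of such packaging, the sketch below shows what a single captured change might look like when rendered as JSON. The field names and values are hypothetical and do not represent tcVISION's actual output schema.

```python
import json

# Hypothetical change record, illustrating a JSON-packaged change.
# Field names and values are assumptions, not tcVISION's actual schema.
change_record = {
    "source": "DB2.PROD.CUSTOMER",        # originating mainframe table/file
    "operation": "UPDATE",                # INSERT / UPDATE / DELETE
    "timestamp": "2019-06-03T09:41:27Z",  # commit time of the change
    "key": {"CUSTOMER_ID": 10045},
    "after": {"CUSTOMER_ID": 10045, "NAME": "ACME PTY LTD", "BALANCE": 1523.75},
}

# The serialised form could be written to a file, to HDFS or sent as a Kafka message value.
print(json.dumps(change_record))
```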

tcVISION Architecture

tcVISION is ready to meet new Big Data requirements, technologies and challenges. Thanks to tcVISION's flexible architecture, support for new targets, transport layers and protocols (including specialty, NoSQL and analytic databases such as Exasol, IBM Db2 BLU and MongoDB) is added continuously, quickly and with minimal effort.

With tcVISION, real-time Big Data integration can embrace both mainframe (IBM Db2, IMS/DB, DL/1, Software AG ADABAS, CA IDMS, CA Datacom and VSAM) and LUW (IBM Db2 LUW, Oracle, IBM Informix, Sybase, Microsoft SQL Server, PostgreSQL, Software AG ADABAS LUW) sources.

tcVISION Highlights

With tcVISION, data synchronisation between mainframe and Big Data pays off:

• Real-time replication of mainframe data to Big Data enables real-time analytics, offloading mainframe application functionality (e.g., online banking queries, e-Government, etc.) to Big Data with data synchronised between the platforms.

• Replication costs are minimised as only changes are exchanged.

• Mainframe resource usage and costs are minimised.

• Data exchange processes can be designed, deployed and maintained with tcVISION without mainframe knowledge, providing cost savings, quicker delivery and project autonomy in Big Data initiatives.

• Reporting and analytics applications are more comprehensive and valuable when mainframe data can be included in the Big Data platform.

A great part of the added value of modern IT systems lies in low-latency data and process integration between transactional and analytical environments. The cross-system integration platform tcVISION delivers this integration in a unique, efficient and reliable way.



With tcVISION, mainframe data can be integrated quickly and easily into Big Data based operational applications, Business Intelligence and Analytics in near real time.



The tcVISION solution is proven in practice and is continuously enhanced to meet the requirements of new technologies. One result is the support for Big Data in tcVISION Version 6: in the current version, Big Data is a fully integrated output platform, including integration with Apache Kafka.



Consequently, tcVISION supports direct streaming of changed data into an Apache Kafka based Big Data environment.



Apache Kafka is an open-source data streaming platform developed by the Apache Software Foundation. It stands out as a distributed system that scales in real time.



It is therefore well suited to meet the challenges of Big Data requirements. As with all output platforms provided by tcVISION, data streaming via Apache Kafka is based on official standard interfaces.



The implementation of the Apache Kafka interface is fast and easy.



Data streaming to Apache Kafka as a transport layer enhances the Big Data connectivity of tcVISION.
In addition to Apache Kafka, transport layers to Big Data include the creation of files, direct output to the Hadoop Distributed File System (HDFS) and output to MongoDB.



The protocols currently used for data transfer to Big Data are JSON, Avro and CSV.
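
On the consuming side, a downstream Big Data application could read such JSON change records from the Kafka stream roughly as sketched below. The topic name, record layout and use of the kafka-python client are assumptions for illustration, not part of the tcVISION product.

```python
import json
from kafka import KafkaConsumer  # kafka-python client, assumed to be installed

# Hypothetical topic fed by the replication stream; the name is illustrative.
consumer = KafkaConsumer(
    "mainframe.changes",
    bootstrap_servers=["kafka-broker:9092"],
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    record = message.value
    # Route the change by operation type (INSERT / UPDATE / DELETE).
    print(record["operation"], record["source"], record.get("key"))
```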



The main focus of the tcVISION integration platform is real-time synchronisation that integrates mainframe data into Big Data based solutions.


Data Synchronisation Diagram

tcVISION Big Data Features

The tcVISION integration platform consists of a variety of state-of-the-art technology components which cover far more than an ETL process.

Environments Supported by tcVISION

Environments Supported
• IBM z Systems: z/OS, z/VSE, Linux on z Systems
• Distributed Systems: Linux on IBM Power Systems, IBM AIX, Microsoft Windows, Unix, Linux

Mainframe Databases
• IBM Db2
• IBM IMS/DB / DL/1
• VSAM
• Software AG ADABAS
• CA IDMS/DB
• CA DATACOM/DB
• PDS/PS

Non-Mainframe Databases
• IBM Db2 LUW
• IBM BLU Acceleration
• IBM Informix
• IBM Netezza
• Oracle
• Sybase
• Microsoft SQL Server
• Software AG ADABAS LUW
• PostgreSQL
• Teradata
• MongoDB
• Flat file integration
• SAP HANA
• MySQL / MariaDB

Big Data / Hadoop Systems
• JSON
• Avro
• CSV
• Kafka (with JSON, Avro or CSV)
• Hadoop Data Lakes / HDFS

Cloud
• Amazon Web Services: Aurora MySQL, Aurora PostgreSQL, AWS S3, Amazon Kinesis
• Microsoft Azure: Azure SQL Database, Azure Database for MySQL/MariaDB, Azure Database for PostgreSQL, Azure Event Hubs

• Data exchange in the sense of real-time synchronisation and replication becomes a single-step operation with tcVISION.

• No additional middleware is required.

• Diverse Change Data Capture technologies allow efficient selection of the required data from the source system, with a focus on the changed data. The data exchange process is reduced to the necessary minimum, resulting in lower costs for the data exchange.

• tcVISION can also use backup and recovery files (e.g. image copies, log files, UNLOADs) as a source for replication, so production data does not need to be touched.

• tcVISION enables the fast and efficient loading (streaming) of large volumes of mainframe data into Big Data, with negligible mainframe processor cost.

• An integrated data repository guarantees transparent data management across platforms.

• Mainframe knowledge is not necessarily required for the replication.

• tcVISION includes a rules engine to transform data into a target-compliant format and allows user-specific processing via supplied APIs (the general pattern of applying changes to a target is sketched after this list).

• The integrated staging concept supports the offload of changed data in “raw format” to less expensive processor systems. This reduces costs and mainframe processor resources to a minimum.

• The preparation of the data for the target system can be performed on a less expensive platform (Linux, UNIX or MS-Windows).

• The transfer to and streaming of data into Big Data is part of the tcVISION data exchange process. No intermediate files are required.

• The exchange of large volumes of data between a production mainframe environment and Big Data can run in parallel processes to reduce latencies to a minimum.

• The tcVISION integration platform contains comprehensive control mechanisms and monitoring functions for an automated data exchange.

• tcVISION has been designed in a way that Big Data based projects can be deployed with complete project autonomy and maximum reduction of mainframe resources.
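
As a rough sketch of that pattern, the snippet below applies a JSON change record to a MongoDB target, upserting on INSERT/UPDATE and deleting on DELETE. It illustrates the general idea only; it is not tcVISION's rule engine or API, and the client library, collection and field names are assumptions.

```python
from pymongo import MongoClient  # assumed target client, for illustration only

client = MongoClient("mongodb://target-host:27017")
collection = client["replica"]["customer"]  # hypothetical target collection

def apply_change(record):
    """Apply one JSON change record to the MongoDB target (illustrative only)."""
    key = record["key"]
    if record["operation"] in ("INSERT", "UPDATE"):
        # Upsert keeps the target document in step with the mainframe source row.
        collection.replace_one(key, record["after"], upsert=True)
    elif record["operation"] == "DELETE":
        collection.delete_one(key)
```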


tcVISION with Big Data Business Functions

 

Business Reporting
Data Quality Management
SOA
Hadoop Clusters
Business Intelligence
Data Warehousing
Mainframe Offload
Migration of Data & Systems
Real Time Analytics
Big Data Management
Application Modernisation
Cloud Technologies

tcVISION Big Data Benefits

With tcVISION, data synchronisation between mainframe and Big Data pays off:

• Near real-time replication of mainframe data to Big Data enables true real-time analytics.

• Mainframe applications (e.g. internet applications such as online banking or e-Government) can be relocated to Big Data with the data kept synchronous on both platforms.

• Because the focus is on changed data, the cost of the data exchange is reduced to a minimum.

• The utilisation of mainframe resources is reduced as far as possible, avoiding costs for mainframe knowledge and mainframe MIPS.

• Data exchange processes can be deployed and maintained with tcVISION without mainframe knowledge; this saves costs and lets Big Data projects be developed and put into production faster.

• The near real-time replication from mainframe to Big Data allows BI, reporting and analytics applications to be relocated to the more cost-efficient and, for these applications, more powerful Big Data platform.

• tcVISION compensates for declining mainframe knowledge.


Data Synchronisation | Data Replication | Data Integration


Data Synchronisation – Areas of application: real-time analytics, Big Data, data warehousing, reporting, business intelligence, data quality management, application modernisation, offloading work to reduce the mainframe's workload, migration of data and systems, use of cloud technologies, SOA, etc.


Data Integration – combines data from across your enterprise into meaningful and valuable information. tcVISION is a complete data integration solution that delivers trusted data from a variety of databases and files across various operating platforms.


Data Replication – Controlled data exchange: tcVISION provides many mechanisms to control the data flow, including very efficient bulk transfer of mass data for initial loads or cyclic data exchange.

Coexistency | Migration | Modernisation | Analytics & big data

Coexistency – Synchronisation of data in a heterogeneous system environment consisting of a mainframe and distributed systems.

Migration – Gradual migration of data and applications in heterogeneous system environments.

Modernisation – Mainframe relief: transfer of mainframe data to distributed systems or Hadoop Data Lakes.

Analytics & big data – ETL of mainframe data for Data Warehousing, Business Intelligence, Analytics & big data.

Cost Reduction

• Relocation of data exchange processes from the mainframe to more cost-effective platforms (e.g. Linux, UNIX, Windows or the cloud).

• Compressed data transfers in raw format.

• Avoidance of mainframe costs: backups and image copies can be used as replication sources to relieve the production mainframe.

• No additional middleware is required, which eliminates costs and implementation effort and provides a more efficient transport layer.

• Data transfer volume is reduced to a minimum through focus on changed data (Changed Data Capture).

• Automatic collision detection and loop-back prevention for bidirectional replication prevent undesired back flow of data to the source of the change (a conceptual sketch follows this list).
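
A minimal conceptual sketch of such loop-back prevention is shown below: each incoming change carries an origin marker, and changes that originated on the local platform are skipped rather than replicated back. The field name and filter logic are assumptions for illustration, not tcVISION's implementation.

```python
LOCAL_SYSTEM = "BIGDATA"  # identifier of this replication endpoint (assumed)

def should_apply(record):
    """Skip changes that originated here so they do not flow back to their source (illustrative)."""
    return record.get("origin") != LOCAL_SYSTEM

incoming = [
    {"origin": "MAINFRAME", "operation": "UPDATE", "key": {"CUSTOMER_ID": 10045}},
    {"origin": "BIGDATA", "operation": "UPDATE", "key": {"CUSTOMER_ID": 10046}},
]

for record in incoming:
    if should_apply(record):
        print("apply", record["key"])
    else:
        print("skip loop-back change", record["key"])
```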
