Term | Description |
--- | --- |
Stream computing | The continuous processing of streaming data. A stream computing engine reads data in stream form from one or more data sources, processes the continuous data streams using multiple operators, and outputs the results to different sinks such as message queues, databases, data warehouses, and storage services. |
Data source | The source that continuously generates data for stream computing. |
Data sink | The destination of the results of stream computing. |
Schema | The structure information of a table, such as column names and data types. In PostgreSQL, a schema is smaller than a database and larger than a table; it can be seen as a namespace inside a database. |
MySQL | A widely used relational database, which can be used as the data source or sink of an ETL job. |
PostgreSQL | A type of relational database similar to MySQL. |
ClickHouse | A columnar database management system (DBMS) for online analytical processing (OLAP). It can be used as the data sink of an ETL job. |
Elasticsearch | A real-time search and data analytics engine. |
Field mapping | The correspondence between fields extracted from the data source and fields loaded into the data sink, applied as the data is computed and cleansed. |
Constant field | A custom field with a fixed value that you can add on the data source or data sink side. |
Calculated field | A field derived by performing value conversion or calculation on fields extracted from the data source, using the built-in functions of Stream Compute Service. |
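To make the relationship between these terms concrete, the sketch below models a single ETL step: records are read from a source, a field mapping carries one field through unchanged, a constant field adds a fixed value, and a calculated field derives a new value. All names here are illustrative assumptions; this is not the Stream Compute Service API.

```python
# Minimal sketch of an ETL field mapping (illustrative only; names are
# hypothetical, not the Stream Compute Service API).

def field_mapping(record):
    """Map one source record to the shape expected by the sink."""
    return {
        "user_id": record["id"],                      # direct field mapping
        "region": "ap-guangzhou",                     # constant field: fixed value
        "amount_usd": record["amount_cents"] / 100,   # calculated field: derived value
    }

# A toy in-memory "source" and "sink"; a real job would read from and
# write to external systems such as MySQL or ClickHouse.
source = [{"id": 1, "amount_cents": 250}, {"id": 2, "amount_cents": 1999}]
sink = [field_mapping(r) for r in source]
print(sink[0])  # {'user_id': 1, 'region': 'ap-guangzhou', 'amount_usd': 2.5}
```

In a streaming job the same mapping function would be applied continuously to each arriving record rather than to a finite list.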