Workspace & Marketplace 2.20 release notes
Highlights
New Location Services
We are happy to announce the availability of new services on platform.here.com. These are ready to use, but do note that charges will apply. All new HERE Location Services are billed at a rate of $/€0.0005 per transaction, except for Japan API transactions, which are priced at $/€0.00065. See the transaction definition for each service in the table below.
Please see the entire services suite at https://platform.here.com/services/
| Service | Transaction Definition |
|---|---|
| Map Tile | 1 Transaction is counted for every 15 Requests. |
| Traffic Tile | 1 Transaction is counted for every 15 Requests. |
| Aerial Tile | 1 Transaction is counted for every 15 Requests. |
| Map Image | 1 Transaction is counted for each Request. |
| Weather | 1 Transaction is counted for each Request. |
| Traffic Flow and Incident | 1 Transaction is counted for each Request. |
| Positioning | 1 Transaction is counted for each Request. |
| Route Matching | 1 Transaction is counted for each Request. |
| Fleet Telematics | 1 Transaction is counted for each Request, with exceptions* |
Exceptions:
- Fleet Telematics: 1 Transaction is counted for each individual Service feature included in a Request.
- Fleet Telematics Custom Routes: 1 Transaction is counted for each routing Request.
- Fleet Telematics ETA Tracking: 2 Transactions are counted for each ETA update Request.
- Fleet Telematics Custom Locations: 1 Transaction is counted for each location, point of interest, polygon, or POI-along-route search in a Request.
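To make the arithmetic concrete, here is a minimal Python sketch of how these definitions translate into billed transactions and cost. The helper names are ours, and the assumption that a partial batch of 15 tile requests rounds up to a whole transaction is also ours, not stated above:

```python
# Illustrative billing math only; rates and transaction definitions come
# from the table above. Function names are ours, not part of any HERE API.
RATE_STANDARD = 0.0005   # $/€ per transaction
RATE_JAPAN = 0.00065     # $/€ per transaction (Japan API transactions)

def tile_transactions(requests: int) -> int:
    """Tile services: 1 transaction per 15 requests.
    Assumption: a partial batch of 15 rounds up to a full transaction."""
    return -(-requests // 15)  # ceiling division

def per_request_transactions(requests: int) -> int:
    """Most other services: 1 transaction per request."""
    return requests

# Example month: 1,000 Map Tile requests plus 200 Weather requests.
usage = tile_transactions(1_000) + per_request_transactions(200)
cost = usage * RATE_STANDARD
print(usage, round(cost, 4))  # 267 transactions, 0.1335 $/€
```

Swapping in `RATE_JAPAN` for Japan API transactions follows the same pattern.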
New SDKs
We are happy to announce the availability of new SDKs on platform.here.com. These are ready to use, but do note that charges will apply. These HERE SDKs are billed based on transactions at a rate of $/€0.0005 per transaction, except for Japan API transactions, which are priced at $/€0.00065.
Please see the entire SDK suite at https://platform.here.com/sdk
| SDK | Billed on |
|---|---|
| HERE SDK Lite Edition for Android | Transactions |
| HERE SDK Lite Edition for iOS | Transactions |
| HERE SDK Explore Edition for Android | Transactions |
| HERE SDK Explore Edition for iOS | Transactions |
| HERE SDK Explore Edition for Flutter | Transactions |
Improved Platform Portal login
Org ID is no longer required when logging into the platform portal, except where a user belongs to more than one organization.
It's also now possible for a pending user to request a new invitation in the login flow, eliminating the need for an org admin to re-invite users.
For users in multiple organizations, we've added "forgot your Org ID" functionality which emails users a list of the organizations to which they belong.
Change layer configurations after a layer is created to correct mistakes without needing to delete and recreate the layer
It is now possible to edit certain layer configurations after a layer has been created, and even after the parent catalog has been marked as "Marketplace Ready". This change primarily benefits developers who make configuration mistakes during layer creation, or who want to change a configuration during CI/CD testing. Note that making certain layer configuration changes after data is stored in the layer, and/or after the parent catalog is available in the Marketplace, is risky and can lead to irrecoverable impacts on data. Make such changes with great caution and a full understanding of the ramifications. Learn more here. The following configurations are now mutable via the API, Data Client Library, and CLI after a layer has been created:
- Stream layer configurations: Throughput, retention (TTL), content type, content encoding (compression) and schema association.
- Index layer configurations: Retention (TTL) and schema association.
- Versioned layer configurations: Schema association.
Volatile layer configuration changes as well as support for this functionality via the Portal will be delivered in a future release.
Decode and encode protobuf partitions with the CLI for more convenience during debugging and testing
You can now use the `get` and `put` commands for partitioned layers and for stream layers with a flag that encodes your JSON data to protobuf prior to uploading it, or decodes it to JSON after downloading it.
Popular testing frameworks integrated in a new data validation module
The Data Validation Library (DVL) enables scalable, efficient testing and validation of versioned catalogs on the HERE platform. In particular, testing of optimized maps and other generated versioned content should be implemented using the DVL. Built on the Data Processing Library (DPL), the DVL abstracts the underlying distributed processing on Spark and the Data API to facilitate quick creation of large-scale validation pipelines. However, writing tests requires knowledge of the DPL.
To lower the entrance barrier for new platform developers, we now follow a more lightweight approach by means of a new data validation module which integrates popular testing frameworks such as Cucumber, JUnit, or ScalaTest, allowing test engineers to write tests without any prior knowledge of the DPL, partitioning concepts, or map compilation. This is achieved by separating the test-data extraction phase from the test scenarios, which are executed in parallel on each self-contained test-data partition. Only the test-data extraction phase requires knowledge of DPL features and the underlying catalog structure; once created, it can be reused by test engineers without platform experience to implement numerous tests. The new module programmatically computes metrics inside the test scenarios and performs the assessment with a predicate over the aggregated metrics, which satisfies the vast majority of testing use cases.
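The metrics-then-predicate pattern described above can be sketched in plain Python, independent of the testing framework used. All names below are illustrative and do not reflect the data validation module's actual API:

```python
# Illustrative sketch of the metrics-then-predicate validation pattern:
# scenarios compute metrics per self-contained partition; the assessment
# is a predicate over the aggregated metrics. Names are ours, not the
# data validation module's actual API.
from dataclasses import dataclass

@dataclass
class PartitionMetrics:
    feature_count: int
    invalid_geometries: int

def scenario(partition_features) -> PartitionMetrics:
    """Runs on one test-data partition; computes metrics, asserts nothing."""
    invalid = sum(1 for f in partition_features if not f.get("geometry"))
    return PartitionMetrics(len(partition_features), invalid)

def aggregate(metrics):
    total = sum(m.feature_count for m in metrics)
    invalid = sum(m.invalid_geometries for m in metrics)
    return total, invalid

def predicate(total, invalid, max_invalid_ratio=0.01):
    """Assessment over aggregated metrics, not per-partition assertions."""
    return total > 0 and invalid / total <= max_invalid_ratio

partitions = [
    [{"geometry": "POINT(1 1)"}, {"geometry": None}],  # one bad feature
    [{"geometry": "POINT(2 2)"}] * 99,
]
metrics = [scenario(p) for p in partitions]  # parallelizable per partition
total, invalid = aggregate(metrics)
print(predicate(total, invalid))  # one invalid geometry in 101 features: True
```

Because each scenario touches only its own partition, the scenarios can run in parallel and only the final predicate needs a global view.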
Note that this means we are deprecating the DVL from the SDK for Java & Scala. The new data validation module in the DPL covers the functionality of the DVL.
Changes, additions and known issues
SDKs and tools
To see details of all changes to the CLI, the Data SDKs for Python, TypeScript, C++, Java and Scala, and the Data Inspector Library, visit the HERE platform changelog.
Web and portal
Known issue: Pipeline templates can't be deleted from the platform portal UI.
Workaround: Use the CLI or API to delete pipeline templates.
Known issue: In the platform portal, new jobs and operations are not automatically added to the list of jobs and operations for a pipeline version when the list is open for viewing.
Workaround: Refresh the "Jobs" and "Operations" pages to see the latest job or operation in the list.
Projects and access management
Known issue: A set number of access tokens (~250) is available for each app or user. Depending on the number of resources included, this number may be smaller.
Workaround: Create a new app or user if you reach the limitation.
Known issue: A set number of permissions is allowed for each app or user in the system across all services. This limit may be lower depending on the resources included and the types of permissions granted.
Known issue: All users and apps in a group are granted permissions to perform all actions on any pipeline associated with that group. There's no support for users or apps with limited permissions. For example, you can't have a role that is limited to viewing pipeline statuses, but not starting and stopping a pipeline.
Workaround: Limit the users in a pipeline group to only those who should have full control over the pipeline.
Known issue: When updating permissions, it can take up to an hour for the changes to take effect.
Known issue: Projects and all resources in a project are designed for use only in HERE Workspace, not the Marketplace. For example, a catalog created in a platform project can only be used in that project. It can't be marked as "Marketplace-ready" nor be listed in the Marketplace.
Workaround: Don't create catalogs in a project that are intended for use in both Workspace and Marketplace.
Data
Known issue: The "Upload data" button in your Layer UI under "More" is hidden when the "Content encoding" field in the layer is set to "gzip".
Workaround: Files (including .zip files) can still be uploaded and downloaded as long as the "Content encoding" field is set to "Uncompressed".
Known issue: The changes released with 2.9 (RoW) and with 2.10 (China) - for adding OrgIDs to catalog HRNs - and with 2.10 (Global) - for adding OrgIDs to schema HRNs - could impact any use case (CI/CD or other) where comparisons are made between HRNs used by various workflow dependencies. For example, requests to compare HRNs that a pipeline is using with those to which a group, user or app has permissions will result in errors if the comparison is expecting results to match the old HRN construct. With this change, data APIs will return only the new HRN construct, which includes the OrgID, e.g. olp-here…, so a comparison between the old HRN and the new HRN will fail.
- Reading from and writing to catalogs using old HRNs will continue to work until this functionality is deprecated (see deprecation notice summary).
- Referencing old schema HRNs will continue to work indefinitely.
Workaround: Update any workflows comparing HRNs to perform the comparison against the new HRN construct, including the OrgID.
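The kind of comparison fix the workaround describes can be sketched as follows, assuming the generic HRN shape `hrn:<partition>:<service>:<region>:<org>:<resource>`; the helper function and sample HRNs are illustrative, not part of any HERE library:

```python
def hrn_matches(old_hrn: str, new_hrn: str) -> bool:
    """Compare two catalog HRNs, treating an empty OrgID field in the old
    construct as a wildcard. Purely illustrative; real workflows should
    simply adopt the new OrgID-qualified HRN construct everywhere."""
    old_parts = old_hrn.split(":")
    new_parts = new_hrn.split(":")
    if len(old_parts) != len(new_parts):
        return False
    return all(o == n or o == "" for o, n in zip(old_parts, new_parts))

old = "hrn:here:data:::my-catalog"          # old construct, no OrgID
new = "hrn:here:data::olp-here:my-catalog"  # new construct with OrgID

print(old == new)            # naive string comparison now fails: False
print(hrn_matches(old, new)) # OrgID-aware comparison succeeds: True
```

This illustrates why workflows that compare a pipeline's HRNs against permission lists break if one side still uses the old construct: the strings differ even though they name the same catalog.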
Known issue: Searching for a schema in the platform portal using the old HRN construct returns only the latest version of the schema. The portal won't show older versions associated with the old HRN.
Workaround: Search for schemas using the new HRN construct, or look up older versions of schemas using the old HRN construct in the CLI.
Known issue: Visualization of index-layer data isn't supported yet.
Pipelines
Deprecation reminder: The batch-2.0.0 environment will soon be removed, as its deprecation period has ended. Migrate your batch pipelines to the batch-2.1.0 run-time environment to benefit from the latest functionality and improvements.
Known issue: A pipeline failure or exception can sometimes take several minutes to be reported.
Known issue: Pipelines can still be activated after a catalog is deleted.
Workaround: The pipeline will fail when it starts running and show an error message about the missing catalog. Find the missing catalog or use a different one.
Known issue: If several pipelines are consuming data from the same stream layer and belong to the same group (pipeline permissions are managed through a group), then each pipeline will only receive a subset of the messages from the stream. This is because, by default, the pipelines share the same application ID.
Workaround: Use the Data Client Library to configure your pipelines so they consume from a single stream. If your pipelines/apps use the Direct Kafka connector, you can specify a Kafka Consumer group ID per pipeline/application. If the Kafka consumer group IDs are unique, the pipelines/apps can consume all the messages from the stream.
If your pipelines use the HTTP connector, create a new group for each pipeline/app, each with its own app ID.
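Conceptually, this behavior stems from Kafka's partition assignment: consumers sharing a group ID split a topic's partitions between them, while consumers in distinct groups each receive every partition. The pure-Python simulation below illustrates that rule; it is a conceptual sketch, not the Data Client Library or Kafka API:

```python
# Conceptual simulation of Kafka-style partition assignment within a
# consumer group. Illustrative only; real assignment is done by Kafka's
# group coordinator, not by client code like this.
from collections import defaultdict

def assign_partitions(partitions, consumers):
    """Round-robin the topic's partitions across the members of ONE group.
    Members of the same group share partitions, so each member sees only
    a subset of the stream."""
    assignment = defaultdict(list)
    for i, p in enumerate(partitions):
        assignment[consumers[i % len(consumers)]].append(p)
    return dict(assignment)

partitions = list(range(8))  # a stream layer backed by 8 partitions

# Two pipelines sharing one group ID: the stream is split between them.
shared = assign_partitions(partitions, ["pipeline-a", "pipeline-b"])
print(shared["pipeline-a"])  # [0, 2, 4, 6] -- only half the messages

# Each pipeline in its own group: each receives all partitions.
own_a = assign_partitions(partitions, ["pipeline-a"])
own_b = assign_partitions(partitions, ["pipeline-b"])
print(own_a["pipeline-a"] == partitions and own_b["pipeline-b"] == partitions)  # True
```

This is why giving each pipeline a unique Kafka consumer group ID (or, for the HTTP connector, its own app ID) lets every pipeline consume the full stream.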
Marketplace (Not available in China)
Known issue: There is no throttling for the beta version of the External Service Gateway. When the system is overloaded, the service slows down for everyone reading from the External Service Gateway.
Workaround: Contact HERE support for help.
Known issue: When the Technical Accounting component is busy, the server can lose usage metrics.
Workaround: If you suspect you're losing usage metrics, contact HERE support to get help with rerunning queries and validating data.
Known issue: Projects and all resources in a project are designed for use only in HERE Workspace and not available for use in HERE Marketplace. For example, a catalog created in a platform project can only be used in that project. It can't be marked as "Marketplace-ready" nor be listed in the Marketplace.
Workaround: Don't create catalogs in a project if intended only for use in the Marketplace.
Summary of active deprecation notices across all components

| No. | Feature summary | Deprecation period announced (platform release) | Deprecation period announced (month) | Deprecation period end | Deprecation summary |
|---|---|---|---|---|---|
| 1 | OrgID added to catalog HRN (RoW) | 2.9 (RoW), 2.10 (China) | November 2019 | February 26, 2021 (extended) | Catalog HRNs without OrgID will no longer be supported after the deprecation period ends on February 26, 2021. |
| 4 | Batch-2.0.0 run-time environment for pipelines | 2.12 | February 2020 | August 19, 2020 (past due) | The deprecation period is over and Batch-2.0.0 will be removed soon. Pipelines still using it will be canceled. Migrate your batch pipelines to the Batch-2.1.0 run-time environment to benefit from the latest functionality and improvements. For details about migrating a batch pipeline to the new Batch-2.1.0 run-time environment, see Migrate Pipeline to new Run-time Environment. |
| 5 | Schema validation to be added | 2.13 | March 2020 | November 30, 2020 | For security reasons, the platform will start validating schema reference changes in layer configurations as of November 30, 2020. Schema validation will check whether the user or application trying to make a layer configuration change has at least read access to the existing schema associated with that layer (i.e., a user or application cannot reference or use a schema they do not have access to). If the user or application does not have access to a schema associated with a layer after this date, attempts to update that layer's configuration will fail until the schema association or permissions are corrected. Make sure all layers refer only to real, current schemas, or have no schema reference at all, before November 30, 2020. The Config API can be used to remove or change schemas associated with layers to resolve invalid schema/layer associations. Any CI/CD jobs referencing non-existent or inaccessible schemas must also be updated by this date, or they will fail. |
| 6 | Customizable volatile layer storage capacity and redundancy configurations | 2.14 | April 2020 | October 30, 2020 | The volatile layer configuration option to set storage capacity as a "Package type" will be deprecated by October 30, 2020. All customers should deprecate their existing volatile layers and create new volatile layers with the new configurations by October 30, 2020. |
| 7 | Stream-2.0.0 run-time environment for pipelines | 2.17 | July 2020 | February 1, 2021 | The Stream-2.0.0 (Apache Flink 1.7.1) run-time environment is now deprecated. Existing stream pipelines that use it will continue to operate normally until February 1, 2021, during which time it will receive security patches only. To continue developing pipelines with the Stream-2.0.0 environment during this period, use platform SDK 2.16 or older. After February 1, 2021, the Stream-2.0.0 run-time environment will be removed and pipelines using it will be canceled. Migrate your stream pipelines to the new Stream-3.0.0 run-time environment to benefit from the latest functionality and improvements. For details about migrating, see Migrate Pipeline to new Run-time Environment; for general Apache Flink support, see Stream Pipelines - Apache Flink Support FAQ. |
| 8 | pipeline_jobs_canceled metric in pipeline status dashboard | 2.17 | July 2020 | February 1, 2021 | The pipeline_jobs_canceled metric used in the pipeline status dashboard is now deprecated because it was tied to the pause functionality and caused confusion. The metric and its explanation will remain available until February 1, 2021, after which the metric will be removed. |
| 9 | Stream throughput configuration changes from MBps to kBps | 2.19 | September 2020 | March 31, 2021 | Support for stream layers configured in MBps will be deprecated by March 31, 2021. After that date, only kBps throughput configurations will be supported. This means the Data Client Library and CLI versions included in SDK 2.18 and earlier can no longer be used to create stream layers, because those versions do not support configuring stream layers in kBps. |
| 10 | Monitoring stability improvements | 2.20 | October 2020 | April 30, 2021 | The `kubernetes_namespace` metric is deprecated and will be supported until April 30, 2021; update all Grafana dashboard queries using it to use the `namespace` metric. The `label_values(label)` function is deprecated and will be supported until April 30, 2021; update all Grafana dashboard queries using it to use `label_values(metric, label)`. The `<Realm>-master-prometheus-datasource` datasource is deprecated and will be supported until April 30, 2021; update all Grafana dashboards using it to use the Primary datasource. |
| 11 | Data Validation Library | 2.20 | October 2020 | April 30, 2021 | This is the first announcement of the deprecation of the Data Validation Library (DVL) from the SDK for Java & Scala. The new data validation module in the SDK's Data Processing Library covers the functionality of the DVL. |