TSM option file parameters

Multi-session restore is similar to the multiple-session backup support: it exploits the mount points available on the server. If the data to be restored resides on several tapes, sufficient mount points are available, and the restore uses the no-query restore protocol, then multiple sessions can be used to restore the data.

If there are not enough mount points available, the client, not the server, issues the ANSE message. Because the number of sessions increases during a multi-session restore, use an appropriate setting in the server options file (dsmserv.opt). See Resourceutilization for more information. Incremental backup throughput does not improve for a single filesystem with a small amount of changed data.

This is good for workload balancing. It is not so good when backing up directly to a tape storage pool collocated by filespace: do not use a multi-session client to back up directly to a storage pool collocated by filespace; use multiple commands, one per filespace, instead. Resourceutilization is a client option used to regulate the level of resources (in effect, the number of concurrent sessions) the client can use.

Even though the multi-session function is transparent to the end user and is started automatically, some parameters enable the user to customize the function. This option increases or decreases the ability of the Tivoli Storage Manager client to create multiple sessions. It specifies the level of resources the Tivoli Storage Manager server and client can use during backup or archive processing.

The higher the value, the more sessions the client can start if it deems them necessary. The range for the parameter is from 1 to 10. When the option is not set, which is the default, only two sessions are created to the server: the default resourceutilization level allows up to two sessions, one for querying the server and one for sending file data.

The relationship between Resourceutilization and the maximum number of sessions created is part of an internal algorithm and, as such, is subject to change. The table below lists the relationship between Resourceutilization values and the maximum sessions created. Producer sessions scan the client system for eligible files; the remaining sessions are consumer sessions and are used for data transfer.

The threshold value affects how quickly new sessions are created. (Table: RU value, maximum number of sessions, number of unique producer sessions, and threshold in seconds.) Backup throughput improvements that can be achieved by increasing the Resourceutilization level vary from client node to client node. Factors that affect the throughput of multiple sessions include the configuration of the client storage subsystem (that is, its disk layout and speed).
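As a sketch, the option is set in the client options file (dsm.sys on UNIX clients; the stanza name and server address below are assumptions, and the value 5 is illustrative, not a recommendation):

```
SErvername  server_a
   COMMMethod           TCPip
   TCPPort              1500
   TCPServeraddress     tsmserver.example.com
   * Allow the client to open additional producer/consumer sessions
   RESOURceutilization  5
```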

The total number of parallel sessions for a client counts toward the maximum number of sessions allowed by the server. This enables multiple sessions with the server during backup or archive, and can result in substantial throughput improvements in some cases.

It is not likely to improve incremental backup of a single large filesystem with a small percentage of changed data. If backup goes directly to tape, then the client node's maximum mount points allowed parameter, MAXNUMMP, must also be updated at the server using the update node command. When a restore is requested, the default is to use a maximum of two sessions, based on how many tapes the requested data is stored on, how many tape drives are available, and the maximum number of mount points allowed for the node.
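For example, MAXNUMMP can be raised from an administrative command line (the node name client1 and the value 4 are assumptions):

```
UPDate Node client1 MAXNUMMP=4
```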

For example, if the data to be restored is on five different tape volumes, the maximum number of mount points is five for the node requesting the restore, and RESOURceutilization is set to three, then three sessions are used for the restore. The TAPEPrompt option specifies whether you want TSM to wait for a tape to be mounted for a backup, archive, restore, or retrieve operation, or to prompt you for your choice. The TCPBuffsize option specifies the size of the internal TCP communication buffer that is used to transfer data between the client node and the server.

A large buffer can improve communication performance but requires more memory. The option specifies the size of the buffer in kilobytes.

The default is 31 KB, and the maximum has since been increased. The default is Yes on the server. The TCPWindowsize option specifies the size of the TCP sliding window in kilobytes.

The default is 32 KB; a higher maximum is supported. The txnbytelimit option specifies the number of kilobytes the client program buffers before it sends a transaction to the server. The range of values extends up to 2 GB. Note: This option can also be defined and adjusted by the server as required during self-tuning operations. A transaction is the unit of work exchanged between the client and server. Because the client program can transfer more than one file or directory between the client and server before it commits the data to server storage, a transaction can contain more than one file or directory.

This is called a transaction group. This option permits you to control the amount of data sent between the client and server before the server commits the data and changes to the server database, thus changing the speed with which the client performs work. The amount of data sent applies when files are batched together during backup or when receiving files from the server during a restore procedure. The server administrator can limit the number of files or directories contained within a group transaction using the txngroupmax option; refer to "TXNGroupmax".

Once this number is reached, the client sends the files to the server even if the transaction byte limit is not reached. Also note that a larger log may result in longer server start-up times. When setting the size of transactions, consider a smaller size if you suffer many resends caused by files changing during backup under the static, shared static, or shared dynamic copy serialization settings.
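A hedged sketch of these communication and transaction options in the client options file (the values shown are illustrative, not recommendations, and must fit within your version's documented ranges):

```
* dsm.sys server stanza excerpt
TCPBuffsize     32
TCPWindowsize   64
TXNBytelimit    25600
```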

Additionally, for small-file workloads we recommend that the customer first stage data to a disk storage pool and then migrate it to LTO. This is beneficial for workstations with large file systems on multiple disks. The use of virtual mount points will also limit the amount of data that must be backed up. Compressing the data on the client reduces demand on the network and on the TSM server.

For Windows clients, journal-based incremental backup is available. Instead of cross-referencing the current state of files with the TSM database, the client backs up only the files indicated as changed in the change journal. It is much faster than a classic incremental backup, although the improvement depends on the amount of changed data; it also requires less memory and less disk usage.

It requires the installation of the Tivoli Journal Engine Service, which monitors filesystem activity for file changes and impacts filesystem performance slightly. Journal options are specified in tsmjbbd.ini; the defaults work well, so just add the filesystems to be monitored. The ifconfig command can be used to change the MTU size value. If the adapter receive buffer sizes can be configured to larger values, increasing the MTU size on the workstation is recommended.

As discussed earlier, if the frame passes through networks with smaller MTUs, increasing the MTU size may degrade performance because of the overhead of disassembling packets at routers and reassembling them at the destination.
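As a sketch, the MTU can be changed on AIX with ifconfig (the interface name en0 and the size 9000 are assumptions; confirm that every hop on the path supports the larger frame first):

```
ifconfig en0 mtu 9000
```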

The default is 1. The minimum is 0, and there is no maximum value. If set to less than 1, this parameter could have a negative impact on performance.

After changing the parameters, you need to rebuild the kernel. The parameters that can affect performance are listed below. To avoid fragmentation, a conservative value is used, although several settings are possible. One parameter specifies the number of bytes that the user can send to a TCP socket buffer before being blocked. A group of options exists that can be used with specific commands on the command line only. When specifying options with a command, always precede the option with a dash (-).

Two command-line options that may improve TSM performance are described below. Note also that the command-line interface is generally faster than the GUI and requires less overhead. The first option is used in conjunction with the restore command, and restores files only if the server copy is newer than the local file. This option may result in lower network utilization because less data must travel across the network.

In a regular incremental backup, the server reads the attributes of all the files in the filesystem and passes this information to the client. The client then compares the server's list against a list of its current file system; these clients usually have a limited amount of memory. With an incremental-by-date backup, the server only passes the date of the last successful backup.

It is no longer necessary to query every active file on the TSM server, so the time savings are significant. However, periodic regular incrementals are still needed to back up files that have only had their attributes changed, and to catch cases such as a new file in your file system whose creation date precedes the last successful backup date. Many customers have a requirement to run AIX clients and servers locally on the same processor.
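An incremental-by-date backup can be requested from the command line with the -incrbydate option (the filesystem name /home is an assumption):

```
dsmc incremental -incrbydate /home
```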

The shared memory protocol should be used when running clients on the same system as the server, because it uses system resources more efficiently. The following parameters should be specified in the server options file, dsmserv.opt.

Specifies the communication method between the TSM client and server. If the client and server are on the same system, use the shared memory protocol. Database mirroring provides higher reliability but comes at a cost in performance, especially with parallel mirroring. To minimize the impact of database write activity, use disk subsystems with a non-volatile write cache. This is true even if there appears to be plenty of bandwidth left on the database disks.
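The shared memory setup mentioned above can be sketched as follows (the port value 1510 is the commonly documented default; verify against your server documentation):

```
* dsmserv.opt (server side)
COMMMethod   SHAREDMEM
SHMPort      1510

* dsm.sys (client stanza)
COMMMethod   SHAREDMEM
SHMPort      1510
```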

Configure one TSM volume per physical disk, or at most two. Separate the recovery log, database, and disk storage pool volumes. Place TSM volumes at the outside diameter of the physical disk.

This gives better sequential throughput and faster seek times. Raw logical volumes can read more slowly due to the lack of read-ahead, and be sure to consider the write penalty of RAID-5 arrays. TSM recovery log and database mirroring provides better recoverability than hardware redundancy. For the TSM server database, use multiple physical disks and divide the database size equally over all volumes.

Place volumes at the disk's outer diameter to minimize seek time. Use an optimal database buffer pool size. Use MIRRORWRITE DB PARALLEL together with the database page shadow to reduce the overhead of mirrored writes, and use multiple recovery log and database volumes. If possible, limit the number of versions of any backup file to what is really needed; file backup performance degrades when there are multiple versions of an object.
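These server tuning options can be sketched in dsmserv.opt (the buffer pool size, in KB, is illustrative; size it to your available RAM):

```
* dsmserv.opt excerpt
BUFPoolsize     262144
MIRRORWrite DB  PARALLEL
DBPAGEShadow    YES
```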

The default number of backup versions is 2. TSM servers have a facility to back up and restore the storage pools for disaster recovery. Using cached disk storage pools can increase restore performance by avoiding tape mounts; the benefit is seen when restoring files that were recently backed up. If the disk pool is large enough to hold a day's worth of data, then caching is a good option.

If this condition is suspected, our recommendation is to turn disk storage pool caching off. To clear your cached files, move the data off each volume: files will be moved to other volumes within the same storage pool, and the cached copies will be deleted.
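A hedged administrative sketch (the volume name /tsm/disk01.dsm is an assumption):

```
MOVe Data /tsm/disk01.dsm
```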

When data is migrated from disk to tape, multiple processes can be used if multiple tape drives are available. In some cases this can shorten the time needed to empty the disk storage volumes, since each migration process works on data for different client nodes. The default value is 1. Tuning the migration thresholds may also help performance: if the thresholds are set too high, migration is delayed. This can cause the TSM disk storage volumes to fill, and when a client attempts to send data to the disk storage volume it sees the full condition and attempts to go to the volume at the next level in the storage hierarchy.

If this is a tape volume, then it may be in use by a migration process, in which case the client session will wait on the tape media to be freed by the migration process.

The client then just sits idle. In this case, the migration thresholds should be lowered so migration starts earlier, or more disk space should be allocated to the TSM disk storage pool. This overrides the client setting. Using collocation will significantly improve the performance of restores for large amounts of data, since fewer tapes are searched for the necessary data; it also decreases the chance of media contention with other clients. The trade-off is that more tapes will be needed.
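One hedged way to set these server-side controls (the pool names and threshold values are assumptions, not recommendations):

```
UPDate STGpool DISKPOOL HIghmig=70 LOwmig=30 MIGPRocess=2
UPDate STGpool TAPEPOOL COLlocate=FILespace
```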

The default for a storage pool is no collocation. It is possible to migrate and recall selected files using HSM. However, you should use that method of manually storing and accessing files only when you are working with small numbers of files, because HSM works on one file at a time, unlike archive, retrieve, restore, and backup, which group files at a transaction boundary.

For a group of small files, it is better to use archive or backup to store them to the server. If you have to migrate a group of small files to the server, you will get better performance if you go to disk rather than tape. Once the files are HSM migrated to disk, then you can use storage pool migration to move the files to tape.

Many times when performance problems occur, an abnormal system condition is the cause. Often the cause can be determined by examining the TSM server activity log, the client error log, or the appropriate system logs for your operating system. TSM throughput can degrade if all client backups are started simultaneously. It is best to avoid concurrent backups and instead spread the TSM clients out over a period of time by using the randomizing feature of scheduling.

By scheduling backups appropriately, server performance may improve. Ensure the resolving PTFs are installed on your system. Raw logical volumes are discussed below. Not all Gigabit Ethernet hardware supports jumbo frames; jumbo frames can give improved throughput and lower host CPU usage.

This modifies the AIX read-ahead options, so make sure it does not degrade other applications. Excessive file cache can cause paging of the database buffer pool, leading to slow database performance (exceptions: RAM-constrained systems, or a database buffer pool that is too large). Lower the value further if it is not effective; the change takes effect in real time. As maxperm approaches minperm, consider lowering minperm as well. Watch vmstat for progress: if page-outs go to zero, page-ins will eventually drop as well. If client and server are on the same processor, use the vmtune parameters vmtune -R -F -c 1.

On the other hand, when reading from a server disk volume, a raw logical volume will not use the read-ahead mechanism; this may result in poorer performance on restores and on server move operations from disk to tape. Instead, use the TSM mirroring facilities. For example, the Skulker program should not be used. It is easy to estimate the throughput for workloads with average file sizes different from those tested in our performance lab.

However, the overall TSM environment must conform to one of the environments in one of our evaluation reports. The first step is to find the table in an evaluation report for the TSM function and environment that matches your specific requirements. The next step is to determine the average file size of the client workload for which the estimate is to be made.

Throughput is effectively limited by the number of files that can be processed in a given amount of time. If the average file size falls between the tested sizes (greater than 1 KB and below the largest size measured in MB), calculate the throughput using the two known measurement points in the table that most closely bound the estimate point. Obtain the following values from the table for the function and environment of interest: LowerFileSize - the average file size in KB at the lower measurement point.

UpperFileSize - average file size in KB at the upper measurement point. Estimating throughput for environments that have not been directly tested can be more difficult.
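Since throughput is effectively a file-processing rate, one hedged way to combine the two bounding points is straight linear interpolation (LowerTput and UpperTput are assumed names for the measured throughputs at the two bounding points; the published reports may interpolate on object rate rather than directly on throughput):

$$\mathrm{EstTput} \approx \mathrm{LowerTput} + \frac{\mathrm{EstFileSize}-\mathrm{LowerFileSize}}{\mathrm{UpperFileSize}-\mathrm{LowerFileSize}}\,\bigl(\mathrm{UpperTput}-\mathrm{LowerTput}\bigr)$$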

However, the following important observations can be made. Efficiency indicates the percentage of the maximum throughput rate that can realistically be achieved, which leads to the maximum throughputs obtainable for given networks. In many cases, enabling compaction at the tape drive will improve TSM throughput. To enable the tape drive to use compaction, set the appropriate recording format at the tape drive. The default is DRIVE, which specifies that TSM selects the highest format that can be supported by the sequential access drive on which a volume is mounted; this usually allows the tape control unit to perform compaction.

If you do not use compression at the client and your data is compressible, you should achieve higher system throughput if you use compaction at the tape control unit. Refer to the appropriate TSM Administrator's Guide for more information concerning your specific tape drive. If you compress the data at the client, we recommend that you not use compaction at the tape drive.

For example, when writing to a tape drive, normally the drive returns control to the application when the data is in the tape drive's buffer, but before the data has actually been written to tape. This mode of operation provides all tape drives a significant performance improvement. However, the drive's buffer is volatile. If the application wants to be absolutely sure the write makes it to tape, the application needs to flush the buffer. When writing to a tape drive, network bandwidth must be considered.

Therefore, you will not be able to back up to LTO, or any tape drive, any faster than that. If TSM clients have, on average, small files, it is recommended that these clients back up to a disk storage pool for later migration to tape; this allows more efficient data movement to tape. This manual can be found online. However, there is still a performance penalty with writes: for writes of the size that concern the TSM database, cached controllers may give somewhere on the order of 2x the performance of their non-cached counterparts.

Some will say that this is not a big deal, so RAID-5 is a reasonable consideration; it comes down to finding the throughput bottleneck. In these cases, RAID-5 is probably not a good choice. The AIX location codes for the adapter slots matter: for example, if you are going to do a lot of backups to disk, you probably do not want your network card and disk adapter on the same PCI bus, though you should be able to get close in most cases. The archive function is a powerful way to store inactive data with a fixed retention time.

However, some TSM users have been frustrated when using the archive function because of inadequate performance or a perceived lack of functionality. This paper will review the functional changes that the archive function has undergone plus provide some implementation advice for those TSM users who plan to use TSM archives.

This could be individual files or groups of files, and archive also works for large-scale archiving of an entire system. ADSM Version 1 would rebuild directories on retrieve, if necessary, but they were rebuilt with default properties. Also, the default Description was changed to be a date and time stamp.

To improve TSM database performance, new internal tables were added for indexing by Description. These tables were populated when a TSM client first used the Version 3 code. To further improve the overall functionality, archive of directories was added.

To ensure that a directory would not be expired before all of its archived file entries were expired, directories were, by rule, assigned the Management Class with the longest retention. If clients repeatedly archived the same set of files or used command file driven archives that repeatedly invoked the CLI, they could see explosive growth of the TSM Database and many duplicates of archived directories.

The problem was further compounded by the fact that directories were assigned to the Management Class with the longest retention. This changed at the Version 3 level: as part of archiving a file, the client first queries the server to see if a directory archive already exists (the Description was included as part of the unique directory identification). Also, the timestamp was removed from the default Description. Several changes were made on the TSM server at this code level.

Utilities were created to clean up the excess directory archives. Inventory Expiration was changed to check whether an archived directory was still referenced before that directory would be expired. Since this ensured that no directory would be expired before its files were expired, the directory archives were now assigned to the Management Class with the shortest retention. Unfortunately, this did not extend to most pre-Version 3 clients.

The other changes had the effect of slowing Archives and Inventory Expirations, especially where there already were many directory archives. With Version 4, a new internal table was added to the TSM database to improve search time for large numbers of archives of files with the same Description. These functions were also built into the later TSM Version 3 code.

The first step in implementing archive should be to determine whether archive is indeed the proper function for the business problem. Archive of a set of business-related files, even on a repeated basis, is appropriate. Some archive practices should be avoided, however; these kinds of archives tend to exacerbate the problems described above by flooding the TSM Database with many directory and file entries.

Using Archive to accomplish tape rotation is also likely to create problems with excessive directory and file entries in the TSM Database.

A more efficient way to do this is to use the client Backup function and a series of storage pools. Backups are directed to the storage pools in round-robin fashion. When it is time to recover the tape volumes from the off-site location, all the volumes are deleted from that storage pool. Whenever possible, backup sets should be used as a replacement for archives. If backup sets are not appropriate, another option is to aggregate the files and directories before archiving them with a tool such as

PKZIP or tar.

Tableau Services Manager (tsm) configuration options

You can view pending changes using tsm pending-changes list. For more information, see tsm pending-changes. Required, along with -u or --username, if no session is active. Use the specified address for Tableau Services Manager.

Use this flag to trust the self-signed certificate on the TSM controller. Specify a user account; if you do not include this option, the command is run using the credentials you signed in with. Tableau Server on Linux Help. Alternatively, you can adjust the maximum number of external embedded assets that can be deleted using the corresponding databaseservice setting.

For more information see, Troubleshoot missing content. Controls whether Desktop License Reporting is enabled on the server. When set to false the default , no Administrative Views related to desktop licenses are available. Set this to true to enable license reporting and to make license usage and expiration Administrative Views visible on the Server Status page.

Note: Desktop License Reporting must be enabled on the client Tableau Desktop in order for information to be reported to Tableau Server. Controls whether Tableau Server allows embedded credentials in bootstrap files. When enabled the default , embedded credentials are included in the bootstrap file unless you specify that they should not be included. Set this to false if credentials should never be included in any bootstrap file you generate. For more information on generating bootstrap files, see tsm topology nodes get-bootstrap-file.
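A hedged CLI sketch for changing a configuration key and applying it (the key name features.DesktopReporting matches Tableau's documented name for Desktop License Reporting; verify it for your version):

```
tsm configuration set -k features.DesktopReporting -v true
tsm pending-changes apply
```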

Applies only to servers that use local authentication. Set to true to let users reset their passwords with a "Forgot password" option on the sign-in page. Note: This option was added in a later version and is not available in earlier releases. A related option sets the logging level for File Store. To disable caching of Tableau Server data on the client, set this option to true.

By default, the HSTS policy is set for one year (31,536,000 seconds). This time period specifies how long the browser will access the server over HTTPS. The default logging level is notice; other options include debug, info, warning, and error. If you change the logging level, be aware of the potential impact on disk space usage and performance. As a best practice, return the logging level to the default after you have gathered the information you need.

The maximum size (in bytes) of header content that is allowed to pass through the Apache gateway on HTTP requests. A low value for this gateway setting can cause problems, so be sure to test HTTP authentication scenarios before deploying into production. We recommend setting the related tomcat limit to match. The browser will then display the content accordingly; this process is referred to as "sniffing." The logging level for Gateway can also be set. If Tableau Server is configured to work with a proxy server or external load balancer, this is the name entered in a browser address bar to reach Tableau Server.

For example, if Tableau Server is reached by entering tableau. Note: This will not eliminate the threat of such attacks, and could have the unintended impact of terminating slow connections.

When enabled by the preceding option, the related gateway timeout settings take effect. The primary use of this option is as a defense against the Slowloris attack; see the Wikipedia entry on Slowloris (computer security). The next option applies to proxy server environments only: the IP address(es) or host name(s) of the proxy server. Another option sets the disk space limit for a query that spools to disk; if disk space usage by the spool grows too large, use this option to limit the amount of disk space that any one query can use. The spool limit is specified as a size value.

For example, append G to the value to express the limit in gigabytes. Another option sets the disk space limit for all queries that spool to disk; use it to limit the total amount of disk space that all queries use when spooling. Tableau recommends that you start with this configuration when fine-tuning your spooling limits. By default, query information is logged; if you find that the log files are too large for the available disk space, you can set this to false to disable logging query information.

Tableau recommends leaving this configuration set to true. This setting is useful for finding out more about queries, such as compilation and parsing times. By default this setting is disabled; you can turn it on by setting the value to true to collect more details about your queries.

When set to true, this logs the query plans of queries identified as problematic: queries that are canceled, that run slower than 10 seconds, or that spool to disk fall into this category. The information in the logs can be useful for troubleshooting problematic queries; you can change the setting to false if you are concerned about log size. Another option controls the maximum amount of memory used by Hyper: specify the number of bytes, appending 'k' for kilobytes, 'm' for megabytes, 'g' for gigabytes, or 't' for terabytes.

For example, the limit can be set with hyper.memory_limit. Alternatively, specify the memory limit as a percentage of the overall available system memory. The next setting applies only to Windows. Hyper keeps decompressed and decrypted parts of the extract in memory to make subsequent accesses faster, and this setting controls when worker threads start writing that data out to a disk cache to reduce memory pressure. If given as a percentage, the value is interpreted as a percentage of the overall hyper.memory_limit. The value should be larger than the related reclaim threshold setting.
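A hedged sketch using the documented hyper.memory_limit key (the 70% value is illustrative, not a recommendation):

```
tsm configuration set -k hyper.memory_limit -v 70%
tsm pending-changes apply
```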

When interacting with a Hyper file, Hyper will write out some data for caching or persisting the data. Windows has the special behavior that it locks freshly written data into memory.

To avoid swapping, we force out the data when Hyper reaches the configured limit for the reclaim threshold. When the soft reclaim threshold is reached, Hyper will try to reclaim cached data in the background to attempt to stay below the reclaim threshold. In situations where swapping would happen otherwise, triggering reclamation in Hyper can lead to a better outcome. Therefore, if your Tableau Server installation experiences a lot of swapping, this setting can be used to attempt to reduce the memory pressure.

Alternatively, specify the value as a percentage of the overall configured memory for Hyper. Controls the number of network threads used by Hyper. Specify either the number of network threads for example, hyper. Network threads are used for accepting new connections and sending or receiving data and queries. Hyper uses asynchronous networking, so many connections can be served by a single thread.

Normally, the amount of work that is done on network threads is very low. The one exception is opening databases on slow file systems, which can take a long time and block the network thread. A boolean setting that controls file integrity checks in Hyper. When set to true , Hyper will check the data in an extract file when it is first accessed. This allows silent corruption and corruption that would crash Hyper to be detected.

In general, it is advisable to turn this setting on except for installations with very slow disks where it could cause performance regressions.

Sets an upper bound on the total thread time that can be used by individual queries in Hyper. Append 's' to the value to indicate seconds, 'min' for minutes, or 'h' for hours. For example, set the corresponding hyper key to restrict all queries to a given amount of total thread time. If a query runs longer than the specified limit, the query fails and an error is returned.

This setting allows you to automatically control runaway queries that would otherwise use too many resources. Hyper executes queries in parallel: for example, if a query runs on 30 threads, its total thread time accumulates at 30 times the elapsed wall-clock time. Another option controls the maximum memory consumption that an individual query can have; alternatively, specify the session memory limit as a percentage of the overall available system memory.

Lowering this value can help when a query is using excessive amounts of memory and making other queries fail over a long period of time.
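A hedged sketch using the documented hyper.session_memory_limit key (the value is illustrative):

```
tsm configuration set -k hyper.session_memory_limit -v 900m
tsm pending-changes apply
```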

Improves the chance that the extract for a query is already cached. If the node with the extract cached cannot support additional load, you will be routed to a new node and the extract will be loaded into cache on the new node.

This results in better system utilization because extracts are only loaded into memory if there is load that justifies the need.

Switches the load balancing metric from random selection to picking the Data Engine (Hyper) node based on a health score made up of a combination of current Hyper activity and system resource usage.

Based on these values, the load balancer will pick the node that is most capable of handling an extract query. Sets the upper limit of disk space at which Hyper will stop allocating space for temporary files. This setting can help to stop the hard disk from filling up with temporary files from Hyper and running out of disk space. If disk space reaches this threshold, Hyper will attempt to recover automatically without administrator intervention. Specify it as percentage of the overall available disk space to be used.

For the Data Engine to start, the configured amount of disk space must be available; if it is not, free up disk space on the device. Another option sets the maximum number of threads Hyper should use for running queries.

Use this when you want to set a hard limit on CPU usage. Specify either the number of threads or the percentage of threads in relation to the logical core count. Hyper will most likely not use more resources than this setting configures, although Hyper background and network threads are not affected by it (they tend not to be CPU-intensive anyway).

It is important to consider that this setting controls the number of concurrent queries that can be executed. If you decrease this setting, the chance of queries having to wait for currently running queries to complete increases, which may affect workbook load times. For example, if you set this value to 10 threads, queries can be parallelized across up to 10 threads.

If only 2 queries are running, the remaining 8 threads are used to parallelize those 2 queries.

A related option sets a soft limit: a way for you to cap CPU usage while still allowing Hyper to go beyond the soft limit, up to the hard limit, if necessary.
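The thread-sharing behavior described above can be sketched as an even split of the configured thread budget across running queries (an illustration only; Hyper's actual scheduler is more sophisticated):

```python
def threads_per_query(thread_limit: int, running_queries: int) -> list:
    """Evenly divide the configured thread budget among running queries."""
    if running_queries == 0:
        return []
    base, extra = divmod(thread_limit, running_queries)
    return [base + (1 if i < extra else 0) for i in range(running_queries)]

# With a 10-thread limit and 2 running queries, each query can use 5 threads.
print(threads_per_query(10, 2))  # [5, 5]
```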

Note: For information on the related hyper. options, see the linked documentation (Link opens in a new window).

This option allows spooling; in other words, it lets Hyper execute a query using the disk if the query exceeds RAM usage. Tableau recommends that you use the default setting.

You can turn spooling off by setting the value to false if you are concerned about disk usage; note that spooled queries usually take substantially longer to finish.

Set to the duration, in seconds, that a user's login-based license can be offline with no connection to Tableau Server before the user is prompted to activate again.

This duration is refreshed whenever Tableau Desktop is in use and can connect to Tableau Server.

Set to true to enable login-based license management, or to false to disable it. Note: To use login-based license management, you must activate a product key that is enabled for it. You can use the tsm licenses list command to see which product keys have login-based license management enabled.
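The offline grace period can be pictured as a simple elapsed-time check (a sketch; the 14-day maximum used below is a hypothetical value, not the product's actual default):

```python
def must_reactivate(seconds_since_last_server_contact: int, max_offline_seconds: int) -> bool:
    """Prompt for reactivation once offline time exceeds the configured maximum."""
    return seconds_since_last_server_contact > max_offline_seconds

# Hypothetical 14-day maximum expressed in seconds.
max_offline = 14 * 24 * 60 * 60  # 1,209,600 seconds
print(must_reactivate(3600, max_offline))       # one hour offline: no prompt
print(must_reactivate(1_300_000, max_offline))  # past the maximum: prompt again
```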

The maximum value is specified in seconds (equivalent to a number of days).

Sets the maximum number of rows for sampling data from large data sets with Tableau Prep on the web.

A list of allowed network directories for flow input connections. By default, access to any directory is denied, and only publishing to Tableau Server with content that is included in the tflx file is allowed.

For more information, see Tableau Prep Conductor. Paths must be accessible by Tableau Server; they are verified during server startup and at flow run time. Network directory paths must be absolute and cannot contain wildcards or other path-traversal symbols. Important: This command overwrites existing information and replaces it with the new information you provide. If you want to add a new location to an existing list, you must provide the full list of locations, both the existing ones and the new one.
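The verification rules above (absolute paths, no wildcards or traversal symbols) can be sketched as follows; Tableau's actual validation may check more than this:

```python
import posixpath

def is_allowed_path_entry(path: str) -> bool:
    """Check one safe-list entry: absolute, no wildcards, no path traversal."""
    if not posixpath.isabs(path):
        return False
    if any(ch in path for ch in ("*", "?")):
        return False
    if ".." in path.split("/"):
        return False
    return True

print(is_allowed_path_entry("/mnt/flows/input"))  # True
print(is_allowed_path_entry("flows/input"))       # False: not absolute
print(is_allowed_path_entry("/mnt/*/input"))      # False: wildcard
print(is_allowed_path_entry("/mnt/../etc"))       # False: traversal
```

The sketch uses POSIX-style paths; Windows network shares would need the equivalent backslash handling.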

Use the following command to see the current list of input and output locations: tsm configuration get -k maestro. For more information and details about configuring allowed directories for flow input and output connections, see Safe list Input and Output Locations (Link opens in a new window).

A list of allowed network directories for flow output connections.

When configured, Tableau Catalog blocks specified content from being ingested. Blocklist values must be separated by commas. Important: You should only use this option when directed to do so by Tableau Support. For example, you can use the tsm configuration set --force-keys -k metadata.

Controls whether indexing of new and updated content, also called eventing, is regulated across all sites on the server. By default, event throttling is turned off. To turn on event throttling, change this setting to true with the tsm configuration set command. For more information about event throttling, see Enable Tableau Catalog.

When event throttling is enabled, this is the maximum number of new and updated content items that can be indexed during a specified period of time.

Once the specified limit is reached for a specific item, indexing is deferred. By default, the limit is set to 20 and can't be set lower than 2. You can change the limit with the tsm configuration set command. Throttled events can be identified in the server "noninteractive" log files by the message ingestor event flagged for removal by throttle filter.
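The limit-and-defer behavior within one throttle period can be sketched as a simple counting window (an illustration of the described policy, not Catalog's implementation):

```python
def index_or_defer(events: list, limit: int):
    """Within one throttle period, index up to `limit` events; defer the rest."""
    return events[:limit], events[limit:]

# 25 content updates arrive in one period; the default limit is 20.
events = [f"update-{i}" for i in range(25)]
indexed, deferred = index_or_defer(events, limit=20)
print(len(indexed), len(deferred))  # 20 5
```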

When event throttling is enabled, this is the period of time, in minutes, during which the specified maximum number of new and updated content items can be indexed. Once the specified time is reached, indexing of any additional new and updated content is deferred.

By default, the time is set to 30 minutes. You can change the time with the tsm configuration set command.

This is the longest allowable time, in seconds, for a Catalog or Metadata API query to run before a timeout occurs and the query is canceled.

Tableau recommends incrementally increasing the timeout limit, to no more than 60 seconds, with the tsm configuration set command.

Important: This option should be changed only if you see the error described in Timeout limit and node limit exceeded messages. Increasing the timeout limit can use more CPU for longer, which can impact the performance of tasks across Tableau Server. It can also cause higher memory usage, which can cause issues with the interactive microservices container when queries run in parallel.

This is the number of objects (which loosely maps to the number of query results) that Catalog can return before the node limit is exceeded and the query is canceled.

Tableau recommends incrementally increasing the node limit with the tsm configuration set command. Increasing the node limit can cause higher memory usage, which can cause issues with the interactive microservices container when queries run in parallel.

Controls the interval, in minutes, between refreshes for metrics that rely on live data sources.

Controls the number of consecutive refresh failures that must occur before the metric owner is warned. When set to the default of 10, a metric refresh must fail 10 times in a row before the owner is sent a notification about the failure.

Controls the number of consecutive refresh failures that must occur before a metric refresh is suspended.

Controls whether links to Tableau Server are treated as deep links by the Tableau Mobile app. When set to true, links to supported content types open in the app. When set to false, links open in the mobile browser.
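The metric refresh-failure thresholds described above can be sketched as a consecutive-failure counter; the warn threshold of 10 matches the stated default, while the suspension threshold below is a hypothetical value:

```python
def refresh_status(consecutive_failures: int, warn_at: int = 10, suspend_at: int = 100) -> str:
    """Classify a metric by its run of consecutive refresh failures.
    `suspend_at` is a hypothetical illustration value, not a documented default."""
    if consecutive_failures >= suspend_at:
        return "suspended"
    if consecutive_failures >= warn_at:
        return "owner warned"
    return "ok"

print(refresh_status(3))    # ok
print(refresh_status(10))   # owner warned
print(refresh_status(200))  # suspended
```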

For more information, see Control deep linking for Tableau Mobile.

The length of time, in milliseconds, that Cluster Controller will wait for the data engine before determining that a connection timeout occurred. The default is 30,000 milliseconds (30 seconds).

Sets the parallel query limit for the specified data source connection class.

This overrides the global limit for the data source.

Global limit for parallel queries. The default is 16, except for Amazon Redshift, which has a default of 8.

This option controls whether Explain Data is enabled or disabled for the server.

Overrides the operation restrictions when joining data from a single file connection and a single SQL database connection.
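Resolution of the per-connection-class limit against the global default can be sketched as a simple lookup (the connection-class names here are illustrative):

```python
GLOBAL_DEFAULT = 16
PER_CLASS_DEFAULTS = {"redshift": 8}  # Amazon Redshift defaults to 8

def parallel_query_limit(connection_class: str, overrides: dict) -> int:
    """A per-class override wins; otherwise fall back to the class default,
    then to the global default."""
    if connection_class in overrides:
        return overrides[connection_class]
    return PER_CLASS_DEFAULTS.get(connection_class, GLOBAL_DEFAULT)

print(parallel_query_limit("postgres", {}))               # 16: global default
print(parallel_query_limit("redshift", {}))               # 8: class default
print(parallel_query_limit("redshift", {"redshift": 4}))  # 4: explicit override
```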

Set this option to True to force Tableau to process the join using the live database connection.

The name format was changed in a later version. If this causes problems with existing configurations and you don't need cross-domain protocol transition, configure Tableau Server to use the old behavior by setting this option to true.

Each path must also be referenced in a corresponding auto. setting. Separate each path with a semicolon. This setting overwrites the existing value each time; therefore, whenever you add a Windows share, you must include all shares in the updated value. For more information, see the Community wiki topic, Connecting to a Windows Shared Directory (Link opens in a new window).

Controls whether the query cache size is initialized automatically based on the amount of available system memory.

The query cache consists of the logical query cache, metadata cache, and native query cache.


