HiveServer2 is an improved version of HiveServer that supports Kerberos authentication and multi-client concurrency. It uses the Thrift TCP protocol and listens on port 10000 by default. Beeline is the command-line interface for HiveServer2; it is a JDBC client based on the SQLLine CLI. You can check whether the HiveServer2 service is running and listening on port 10000 using the netstat command; if the process is running but no socket is open on that port, the server most likely failed during startup, so check its logs.

You can install a stable release of Hive by downloading a tarball, or you can download the source code and build Hive from it. Hive compiles most queries into map-reduce jobs, which are then submitted to the cluster indicated by the Hadoop variable mapred.job.tracker. While this usually points to a map-reduce cluster with multiple nodes, Hadoop also offers a nifty option to run map-reduce jobs locally on the user's workstation. Note that local mode execution is done in a separate, child JVM (of the Hive client), and that there may be differences in the runtime environment of the Hadoop server nodes and the machine running the Hive client (because of different JVM versions or different software libraries); this can cause unexpected behavior or errors while running in local mode. Logging during Hive execution on a Hadoop cluster is controlled by the Hadoop configuration.

Hive is not a regular RDBMS: it follows a schema-on-read model, meaning the table schema is applied when data is read rather than enforced when data is loaded. If you want to run the metastore as a network server so it can be accessed from multiple nodes, see Hive Using Derby in Server Mode. You can also use an external database as the Hive metastore, for example a PostgreSQL database on Amazon EMR; to connect to the Postgres server, execute: psql -U postgres

To get started, first start the Hadoop services with sbin/start-all.sh, then start the Hive metastore service. Once the start-dfs.sh daemons are up, start the Hive CLI (command line interface) with the $ hive command on the terminal; the Hive shell should open without any error messages. Query results are not stored anywhere, but are displayed on the console. With data loaded, we can do some complex data analysis on the table u_data; note that if you're using Hive 0.5.0 or earlier you will need to use COUNT(1) in place of COUNT(*).

Hive also supports one-shot execution with the -e option.
Syntax: hive -e <quoted-query-string>
Example: hive -e "select * from test_db.sample;"
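Putting those pieces together, a minimal startup-and-connect sequence might look like the following shell sketch. It assumes a single-node installation with default ports and $HIVE_HOME/bin on the PATH; adjust hosts and ports for your cluster.

$ start-dfs.sh                               # bring up the HDFS daemons
$ hive --service metastore &                 # start the Hive metastore service
$ hive --service hiveserver2 &               # start HiveServer2 (Thrift, default port 10000)
$ netstat -tlnp | grep 10000                 # confirm HiveServer2 is listening
$ beeline -u jdbc:hive2://localhost:10000    # connect with the Beeline JDBC client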
By default, Hive stores its metadata in an embedded Derby database; when initializing the metastore schema, for example, we can use "derby" as the db type. To configure Derby, Hive must be informed of where the database is stored. To store metastore data, create a directory named data in the $DERBY_HOME directory. The start-database command can be used to start an instance of the Derby network server: start-database [--dbhost 0.0.0.0] [--dbport 1527] [--dbhome path/derby]. The default value for the host is 0.0.0.0, which allows Derby to listen on localhost as well as on the IP/hostname interfaces.

Make sure the scratch directory used by Hive has the sticky bit set (chmod 1777). The default HMS heap memory values apply to Hadoop (Hive), Spark, and Presto clusters that are running Hive metastore version 2.3 or later. Audit logs were added in Hive 0.7 for secure client connections (HIVE-1948) and in Hive 0.10 for non-secure connections (HIVE-3277; also see HIVE-2797).

HiveServer2 can be started in the background in any of the following ways: $HIVE_HOME/bin/hive --service hiveserver2 &, nohup hiveserver2 &, or nohup hive --service hiveserver2 &. You may see warnings at startup, which can be ignored. Beeline connects to HiveServer2 over JDBC; by default HiveServer2 listens on localhost:10000, so the connection address will look like jdbc:hive2://localhost:10000.

A partition column such as ds is not part of the data itself but is derived from the partition that a particular dataset is loaded into. A later example creates a table called invites with two columns and a partition column called ds, then selects all rows from partition ds=2008-08-15 of the invites table into an HDFS directory. If the user so wishes, the maximum amount of memory for the local-mode child JVM can be controlled via the option hive.mapred.local.mem.

The location and the type of the metastore RDBMS can be controlled by the two variables javax.jdo.option.ConnectionURL and javax.jdo.option.ConnectionDriverName, defined in hive-site.xml; for an external database, javax.jdo.option.ConnectionUserName and javax.jdo.option.ConnectionPassword supply the credentials. If you run the metastore against MySQL and connections fail, check the grants for the hive user and the host, user, and password entries in the mysql.user table, and confirm the metastore is pointed at the intended MySQL instance (local versus remote).
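For illustration, a hive-site.xml fragment pointing the metastore at a MySQL database might look like the sketch below. The hostname, database name, and credentials are placeholders for your own values; if you stay with the embedded Derby defaults, none of this is required.

<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://dbhost:3306/metastore</value>   <!-- placeholder host and database -->
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hive</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>hivepassword</value>   <!-- placeholder -->
</property>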
Note that starting HiveServer2 this way doesn't log anything to stdout; it launches a background process, and if no TCP socket is listening on port 10000 afterwards, the startup failed and the logs will say why.

If Java is not currently installed in your system, install it before proceeding; Mac is a commonly used development environment. As of 0.13, Hive is built using Apache Maven: to build against Hadoop 1.x use the profile hadoop-1, and for Hadoop 2.x use hadoop-2. To build an older version of Hive on Hadoop 0.20, use Ant; the build output lands in the directory "build/dist", which serves as the install directory. See Understanding Hive Branches for details on which branch to build.

Hive by default gets its configuration from hive-default.xml in its configuration directory. The location of the Hive configuration directory can be changed by setting the HIVE_CONF_DIR environment variable, and configuration variables can be changed by (re-)defining them in the hive-site.xml file in the $HIVE_HOME/conf directory. Setting hive.async.log.enabled to false will disable asynchronous logging and fall back to synchronous logging. REPLACE COLUMNS can also be used to drop columns from a table's schema.

Follow the steps below to launch Hive.
Step 1: Start all your Hadoop daemons:
start-dfs.sh   # this will start namenode, datanode and secondary namenode
start-yarn.sh  # this will start node manager and resource manager
jps            # to check the running daemons
Step 2: Launch Hive with the hive command, and create a database with CREATE DATABASE <database_name>;

The one-shot -e option discussed earlier works with any statement, including LOAD DATA. In a LOAD DATA statement, 'LOCAL' signifies that the input file is on the local file system; if 'LOCAL' is omitted then Hive looks for the file in HDFS, as in the sketch below.
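Here is a short HiveQL sketch tying those pieces together: it creates the invites table described earlier, loads a local file into one of its partitions, and exports that partition to an HDFS directory. The file path is illustrative (it ships with the Hive examples), and /tmp/hdfs_out is an arbitrary output location.

CREATE TABLE invites (foo INT, bar STRING) PARTITIONED BY (ds STRING);

-- 'LOCAL' means kv2.txt is read from the local file system, not HDFS
LOAD DATA LOCAL INPATH './examples/files/kv2.txt'
  OVERWRITE INTO TABLE invites PARTITION (ds='2008-08-15');

-- select all rows from partition ds=2008-08-15 into an HDFS directory
INSERT OVERWRITE DIRECTORY '/tmp/hdfs_out'
  SELECT a.* FROM invites a WHERE a.ds='2008-08-15';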
The Hive DDL operations are documented in Hive Data Definition Language. In the Hive source repository, any branches with names other than the release branches are feature branches for works-in-progress.

At this point you have successfully installed and configured Hive on your system, so let's connect to HiveServer2 now. Start the Hive services by running $HIVE_HOME/bin/hiveserver2, or equivalently with the service command: hive --service hiveserver2. Then connect using a command-line client such as Beeline, which is located in the $HIVE_HOME/bin directory. Beeline has two modes: in embedded mode it runs an embedded Hive (similar to the Hive command line), whereas remote mode is for connecting to a separate HiveServer2 process over Thrift. To start Beeline and HiveServer2 in the same process for testing purposes, for a similar user experience to the HiveCLI, run beeline with an empty connection URL: $HIVE_HOME/bin/beeline -u jdbc:hive2://. In a graphical SQL client, select the Hive driver for working with HiveServer2 from the 'Class Name' input box: org.apache.hive.jdbc.HiveDriver.

To run the HCatalog server from the shell in Hive release 0.11.0 and later, use $HIVE_HOME/hcatalog/sbin/hcat_server.sh start; to use the HCatalog command line interface (CLI), run $HIVE_HOME/hcatalog/bin/hcat. For more information, see HCatalog Installation from Tarball and HCatalog CLI in the HCatalog manual.

As background, HDFS is the primary storage component of the Hadoop ecosystem: it stores large structured and unstructured data sets across the cluster's nodes and maintains the file system metadata in log files.

Hive command line options, by example:
Execute a query from the command line: $ hive -e 'select * from test';
Execute a query in silent mode: $ hive -S -e 'select * from test'
Dump data to a file in silent mode: $ hive -S -e 'select col from tab1' > a.txt

Audit logs are written from the Hive metastore server under the log entry name "HiveMetaStore.audit". Starting with Hive 0.13.0, the default logging level is INFO; refer to https://logging.apache.org/log4j/2.x/manual/async.html for the benefits and drawbacks of asynchronous logging.
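As a sketch of how the logging options interact, you can redirect or raise log output for a single session by setting hive.root.logger on the command line. Note that this property is read at initialization time, so it must be passed at startup rather than changed later with SET:

$ hive --hiveconf hive.root.logger=INFO,console    # send INFO-level logs to the console
$ hive --hiveconf hive.root.logger=DEBUG,console   # more verbose output for troubleshooting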
Hive is commonly used in production in both Linux and Windows environments. HiveServer2 (introduced in Hive 0.11) has its own CLI called Beeline; at present the best source for documentation on Beeline is the original SQLLine documentation. The HiveCLI (deprecated) and Beeline command SET can be used to set any Hadoop (or Hive) configuration variable, and the same variables can be passed at startup:
$ bin/beeline --hiveconf x1=y1 --hiveconf x2=y2   # this sets client-side variables x1 and x2 to y1 and y2 respectively

In summary, Hive configuration can be manipulated by editing hive-site.xml and defining any desired variables (including Hadoop variables) in it, by using the SET command, or by passing --hiveconf options on the command line. Graphical tools such as Karmasphere (http://karmasphere.com, a commercial product) are also available. A short session demonstrating SET is sketched below.
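A minimal sketch of inspecting and overriding a variable from Beeline; the variable name mapreduce.job.reduces is just an example, and the prompt reflects the default local connection:

$ beeline -u jdbc:hive2://localhost:10000
0: jdbc:hive2://localhost:10000> SET mapreduce.job.reduces;    -- show the current value
0: jdbc:hive2://localhost:10000> SET mapreduce.job.reduces=8;  -- override it for this session
0: jdbc:hive2://localhost:10000> SET;                          -- list settings overridden in this session (SET -v lists all)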