Implementing Hadoop Security

Security is an essential part of the Hadoop infrastructure in any organization. Let's look at some of the key components needed to ensure that the infrastructure is secured from external compromise.

The key aspects of security are authentication, authorization and encryption. We will look at ways to implement all three in the context of Hadoop clusters, whether in the cloud or on-premises.


Network Level Security (Apache Knox):

Apache Knox is used to secure the perimeter of Hadoop clusters for data access and job execution. Knox can be deployed as a cluster of Knox nodes that acts as a single access point, routes requests to the Hadoop REST and HTTP APIs, and provides SSO (single sign-on) for multiple UIs. Knox supports LDAP and Active Directory as well as Kerberos authentication.

The most prevalent way to provide secure authentication to Hadoop clusters is Kerberos, which requires client-side configuration and packages. Apache Knox eliminates the need for such client-side libraries and complex configuration.

We can create different topologies, in which we specify the actual hosts and ports of the service components and integrate LDAP/Kerberos authentication, as sketched below.
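As an illustration, a minimal topology file (for example /etc/knox/conf/topologies/topology_name.xml) might look like the following sketch. The LDAP URL, user DN template, hostnames, ports and the realm class name are placeholders and can differ between Knox versions, so treat this only as a starting point:

<topology>
  <gateway>
    <provider>
      <!-- Authenticate incoming requests against LDAP -->
      <role>authentication</role>
      <name>ShiroProvider</name>
      <enabled>true</enabled>
      <param>
        <name>main.ldapRealm</name>
        <value>org.apache.hadoop.gateway.shirorealm.KnoxLdapRealm</value>
      </param>
      <param>
        <name>main.ldapRealm.userDnTemplate</name>
        <value>uid={0},ou=people,dc=example,dc=com</value>
      </param>
      <param>
        <name>main.ldapRealm.contextFactory.url</name>
        <value>ldap://ldaphost.example.com:389</value>
      </param>
      <param>
        <name>urls./**</name>
        <value>authcBasic</value>
      </param>
    </provider>
  </gateway>
  <!-- Map gateway paths to the actual service endpoints -->
  <service>
    <role>WEBHDFS</role>
    <url>http://namenode.example.com:50070/webhdfs</url>
  </service>
  <service>
    <role>HIVE</role>
    <url>http://hiveserver.example.com:10001/cliservice</url>
  </service>
</topology>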

Example to access HDFS data:

Make directory:
curl -ik -u knox_username -X PUT 'https://knoxhost:8443/gateway/topology_name/webhdfs/v1/user/hdfs/input?op=MKDIRS'

HTTP/1.1 200 OK
Date: Fri, 01 Sep 2017 09:10:41 GMT
Set-Cookie: JSESSIONID=k9klsdy2yyeg1engj31y5djh8;Path=/gateway/test;Secure;HttpOnly
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Set-Cookie: rememberMe=deleteMe; Path=/gateway/test; Max-Age=0; Expires=Thu, 31-Aug-2017 09:10:41 GMT
Cache-Control: no-cache
Expires: Fri, 01 Sep 2017 09:10:41 GMT
Pragma: no-cache
Content-Type: application/json; charset=UTF-8
X-FRAME-OPTIONS: SAMEORIGIN
Server: Jetty(6.1.26.hwx)
Content-Length: 16

The above command creates the input directory under /user/hdfs.
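To verify, the new directory can be listed through the same gateway (the hostname and topology name are placeholders, as above):

curl -ik -u knox_username -X GET 'https://knoxhost:8443/gateway/topology_name/webhdfs/v1/user/hdfs?op=LISTSTATUS'

The JSON response should include an entry for the input directory.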

Example to access a Hive table using Beeline:
beeline> !connect
jdbc:hive2://knoxhost:8443/;ssl=true;sslTrustStore=/opt/jdk1.8.0_144/jre/lib/security/cacerts;trustStorePassword=changeit?hive.server2.transport.mode=http;hive.server2.thrift.http.path=gateway/test/hive

Connecting to jdbc:hive2://knoxhost:8443/;ssl=true;sslTrustStore=/opt/jdk1.8.0_144/jre/lib/security/cacerts;trustStorePassword=changeit?hive.server2.transport.mode=http;hive.server2.thrift.http.path=gateway/test/hive

Enter username for jdbc:hive2://knoxhost:8443/;ssl=true;sslTrustStore=/opt/jdk1.8.0_144/jre/lib/security/cacerts;trustStorePassword=changeit?hive.server2.transport.mode=http;hive.server2.thrift.http.path=gateway/test/hive: knox_user

Enter password for jdbc:hive2://knoxhost:8443/;ssl=true;sslTrustStore=/opt/jdk1.8.0_144/jre/lib/security/cacerts;trustStorePassword=changeit?hive.server2.transport.mode=http;hive.server2.thrift.http.path=gateway/test/hive: **********

log4j:WARN No appenders could be found for logger (org.apache.hive.jdbc.Utils).

log4j:WARN Please initialize the log4j system properly.

log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.

Connected to: Apache Hive (version 1.2.1000.2.6.2.0-205)

Driver: Hive JDBC (version 1.2.1.spark2)

Transaction isolation: TRANSACTION_REPEATABLE_READ

0: jdbc:hive2://knoxhost:8443/> show databases;

+----------------+--+
| database_name  |
+----------------+--+
| default        |
+----------------+--+
1 row selected (4.169 seconds)

0: jdbc:hive2://knoxhost:8443/> create database test;

No rows affected (1.271 seconds)

0: jdbc:hive2://knoxhost:8443/> show databases;

+----------------+--+
| database_name  |
+----------------+--+
| default        |
| test           |
+----------------+--+
2 rows selected (2.222 seconds)

The Hortonworks Knox tutorial can be accessed here.

Authentication (Kerberos):

The primary purpose of a Hadoop cluster is to store and process large amounts of data, which requires secure handling to prevent unauthorised access. The Kerberos network authentication protocol provides strong authentication for client/server applications. For each operation, the client is required to present its identity (principal) to the Kerberos server. There are two types of principals: user principals and service principals.

Another important term in Kerberos is the realm. A realm is an authentication and administrative domain, and every principal is assigned to a specific Kerberos realm.

The Key Distribution Centre (KDC) stores and controls all Kerberos principals and realms.


The KDC (Key Distribution Centre) has three components:

  • Kerberos Database
  • Authentication Server (AS)
  • Ticket Granting Service (TGS)

The Kerberos database stores and controls all principals and realms. Kerberos principals in the database are identities with the following naming convention:

User@EXAMPLE.COM (User Principal)

hdfs/node23.example.com@EXAMPLE.COM (Service Principal)

The AS is responsible for issuing a TGT (Ticket Granting Ticket) when a client initiates a request to it.

The TGS is responsible for validating TGTs and issuing service tickets. A service ticket allows an authenticated principal to use the services provided by an application server, which is identified by its service principal.
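On client machines, the realm and the location of the KDC are typically configured in /etc/krb5.conf. A minimal sketch, with placeholder realm and host names:

[libdefaults]
  default_realm = EXAMPLE.COM

[realms]
  EXAMPLE.COM = {
    kdc = kdc.example.com
    admin_server = kdc.example.com
  }

[domain_realm]
  .example.com = EXAMPLE.COM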

To create a principal:

As the root user:

kadmin.local -q "addprinc -pw orzota hdfs-user"

The above command adds a new hdfs-user principal with orzota as the password.
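Service principals are normally created with a random key and exported to a keytab so that Hadoop daemons can authenticate without a password. A sketch, with an illustrative host, realm and keytab path:

kadmin.local -q "addprinc -randkey hdfs/node23.example.com@EXAMPLE.COM"
kadmin.local -q "ktadd -k /etc/security/keytabs/hdfs.service.keytab hdfs/node23.example.com@EXAMPLE.COM"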

To access HDFS data from a Kerberized client machine:

$ kinit

Password for hdfs-user@ORZOTAADMIN.COM:

$ klist

Ticket cache: FILE:/tmp/krb5cc_1013

Default principal: hdfs-user@ORZOTAADMIN.COM

Valid starting       Expires              Service principal
09/14/2016 14:54:32  09/15/2016 14:54:32  krbtgt/ORZOTAADMIN.COM@ORZOTAADMIN.COM
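Once the ticket has been obtained, HDFS commands issued from the same session are authenticated automatically. For non-interactive use, a keytab can be supplied instead of typing a password (the keytab path below is illustrative):

kinit -kt /etc/security/keytabs/hdfs-user.keytab hdfs-user@ORZOTAADMIN.COM
hdfs dfs -ls /user/hdfs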

Authorization (Apache Sentry / Ranger)

In a Hadoop infrastructure, Apache Sentry or Apache Ranger can be used to manage security centrally across the various components of a Hadoop cluster. In this blog, we will consider Ranger for authorization.

Ranger is used to authorize users and groups (including users authenticated by Kerberos) to access resources inside the Hadoop ecosystem.

Currently, Ranger provides auditing and plugins for each of the Hadoop services, including HDFS, Hive, HBase, YARN, Kafka, Storm, Knox and Solr. Ranger uses Solr to audit user actions on all supported services.

Using these plugins, a Hadoop administrator can create policies that authorize users to access Hadoop services.

For example, the Hive Ranger plugin provides authorization at the database, table and column level. With it, we can create specific or role-based policies for each user/group, thereby controlling the kind of queries that can be run on a database or table.
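Such a policy can also be created through the Ranger Admin REST API; the sketch below grants bob SELECT access on every table in the test database. The Ranger host, admin credentials and the Hive service name (cl1_hive) are placeholders:

curl -u admin:admin -H 'Content-Type: application/json' -X POST \
  'http://rangerhost:6080/service/public/v2/api/policy' \
  -d '{
        "service": "cl1_hive",
        "name": "test-db-select",
        "resources": {
          "database": { "values": ["test"] },
          "table":    { "values": ["*"] },
          "column":   { "values": ["*"] }
        },
        "policyItems": [
          { "users": ["bob"],
            "accesses": [ { "type": "select", "isAllowed": true } ] }
        ]
      }'

The same policy can of course be created from the Ranger Admin UI instead.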

The Hortonworks Ranger tutorial can be accessed here.

Encryption (Ranger KMS):

Ranger Key Management Server (KMS) is built on the Hadoop KMS developed by the Apache community. It extends the native Hadoop KMS functions by letting the Ranger Admins store keys in a secure database.

Ranger provides centralized key management through the Ranger Admin UI. Ranger Admin provides the ability to create, delete and update keys using its dashboard or REST APIs, and also to manage access control policies within Ranger KMS. These access policies control permissions to generate or manage keys, adding another layer of security for data encrypted in Hadoop.
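For instance, assuming the Hadoop client is configured to use the Ranger KMS as its key provider, keys can also be created and listed from the command line with the standard Hadoop key CLI (the key size below is illustrative):

hadoop key create hdfs-encryption -size 256
hadoop key list -metadata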

HDFS Encryption Example:

In the Ranger KMS UI, create a key with the name hdfs-encryption.

Add a new policy named key-test and give decrypt permission only to the bob user.

In HDFS:

1. Create the /test directory and give ownership to the bob user.

hdfs dfs -mkdir /test

hdfs dfs -chown -R bob:hdfs /test

2. Create the encryption zone:

[hdfs@ip-172-31-4-145 ~]$ hdfs crypto -createZone -keyName hdfs-encryption -path /test

Added encryption zone /test

[hdfs@ip-172-31-4-145 ~]$ hdfs crypto -listZones

/test hdfs-encryption

3. Verify read/write permissions for the user bob. Only the bob user can access data in /test.

4. If you try to access the data as some other user, it will throw the following error:

[hdfs@ip-172-31-4-145 ~]$ hdfs dfs -put test1.txt /test/
put: User:alice not allowed to do 'DECRYPT_EEK' on 'hdfs-encryption'
17/08/17 10:51:02 ERROR hdfs.DFSClient: Failed to close inode 17051

org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException): No lease on /test/test1.txt._COPYING_ (inode 17051): File does not exist. Holder DFSClient_NONMAPREDUCE_1683412138_1 does not have any open files.
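For contrast, the bob user, who holds the decrypt permission in the key-test policy, can write and read files in the encryption zone transparently (a sketch, assuming bob has a valid Kerberos ticket):

[bob@ip-172-31-4-145 ~]$ hdfs dfs -put test1.txt /test/
[bob@ip-172-31-4-145 ~]$ hdfs dfs -cat /test/test1.txt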

