ROLE OVERVIEW
This role is responsible for supporting day-to-day data lake
operations, including pipeline monitoring and troubleshooting, data extraction
and validation, and process automation for
operational excellence. The role involves close collaboration with
cross-functional teams and adherence to service-level agreements (SLAs) to
support timely data-driven operations and pipeline executions. This role
ensures accurate and reliable credit data by designing efficient processes,
resolving inconsistencies, and maintaining high data quality to support robust
credit scoring and informed business decisions.
KEY RESPONSIBILITIES
Technical & Data Operations
·
Perform ad hoc and regular bulk data extraction based on business requirements
and in line with SLAs. Such extraction may include both internal and external
sources.
·
Support and optimise Extract, Transform, Load
(ETL) processes.
·
Write Python scripts to automate routine data
operations and data validation tasks.
·
Handle data pipeline issues such as failures, missed
SLAs, and incident escalations.
·
Assist the data engineering team and other users in
deploying new pipelines or services to production.
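The scripting duties above can be illustrated with a minimal sketch of an automated validation check on a bulk extract. The column names and rules here are hypothetical examples, not part of any actual schema for this role:

```python
from datetime import datetime

# Hypothetical required columns for a credit-data extract (illustrative only).
REQUIRED_COLUMNS = {"customer_id", "score", "extract_date"}

def validate_rows(rows):
    """Return a list of (row_number, issue) tuples for basic quality checks."""
    issues = []
    for i, row in enumerate(rows, start=1):
        missing = REQUIRED_COLUMNS - row.keys()
        if missing:
            issues.append((i, f"missing columns: {sorted(missing)}"))
            continue
        if not row["customer_id"]:
            issues.append((i, "empty customer_id"))
        try:
            # Expect ISO-style dates; flag anything else.
            datetime.strptime(row["extract_date"], "%Y-%m-%d")
        except ValueError:
            issues.append((i, "bad extract_date format"))
    return issues

# Example usage on in-memory rows (in practice these would come from
# a CSV extract via csv.DictReader or a database query):
rows = [
    {"customer_id": "C001", "score": "710", "extract_date": "2024-05-01"},
    {"customer_id": "", "score": "655", "extract_date": "2024/05/01"},
]
print(validate_rows(rows))
```

A script like this could run after each extraction and feed its findings into the incident-escalation process described above.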
Process Management & SLA Adherence
·
Establish control and monitoring points to maintain runtime stability, data integrity,
and data security, and to adhere to committed SLAs on a daily basis.
·
Identify and troubleshoot data quality issues
proactively.
·
Perform data lake capacity planning and
utilization tracking.
·
Maintain change controls with clear audit trails.
·
Collaborate with cross-functional teams to gather
data requirements and ensure timely delivery.
Documentation & Knowledge Management
·
Document data workflows, automation scripts, and
issue resolutions for future reference and process continuity.
·
Maintain version control and change tracking for all
pipelines.
WHAT DOES IT TAKE TO BE SUCCESSFUL
Qualifications
·
Bachelor’s Degree in Computer Science, Information
Technology, or a related field.
Work Experience
·
1–2 years of experience in a data engineering, data operations,
database management, or similar role.
Skills & Competencies
·
Strong knowledge of SQL, NoSQL, Python, PySpark, DAX
scripting, etc.
·
Experience with data extraction from databases, APIs, or
flat files.
·
Familiarity with automation tools such as Airflow, cron
jobs, or custom Python scripts.
·
Ability to troubleshoot data issues.
·
Ability to follow and manage SLAs for task completion
and data delivery.
Attributes
·
Strong attention to detail and commitment to data
accuracy.
·
Problem-solving mindset and ability to work
independently.
·
Excellent communication and time-management skills.
·
Ability to work in a fast-paced, SLA-driven environment.