Software Developer Intern
- Designed and developed scalable SQL-based ETL pipelines for extracting, transforming, and loading data from structured and semi-structured sources.
- Automated and orchestrated data workflows on Azure Databricks, using PySpark and Delta Lake to optimize big data processing.
- Administered and optimized Linux-based environments for deployment and job scheduling.
- Containerized applications with Docker and streamlined deployment, improving CI/CD workflows.
- Collaborated with cross-functional teams to build robust data models and improve data accessibility for analytics.