Batch Processing, Parallel Processing
- Batch Processing: lets users submit a series of programs (jobs) that run to completion without further user input or manual intervention. It is commonly used to split a large workload into smaller jobs, such as building reports or processing big data. Batch jobs can also run in parallel across machines with distributed computing; it is implemented at the architecture level.
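To make the idea concrete, here is a minimal sketch of a batch run in Python: a queue of jobs is processed end to end with no user interaction, and a failure is recorded rather than prompting anyone. The job names and the `summarize()` step are placeholders, not a real pipeline.

```python
# Minimal batch-processing sketch: every job runs to completion
# without manual intervention. summarize() stands in for real work
# such as building a report.

def summarize(text: str) -> str:
    """Placeholder per-job work: count the words in the payload."""
    return f"{len(text.split())} words"

def run_batch(jobs: dict[str, str]) -> dict[str, str]:
    """Run every queued job; record failures instead of stopping."""
    results: dict[str, str] = {}
    for name, payload in jobs.items():
        try:
            results[name] = summarize(payload)
        except Exception as exc:
            results[name] = f"FAILED: {exc}"
    return results

if __name__ == "__main__":
    queue = {"report-a": "alpha beta gamma", "report-b": "one two"}
    print(run_batch(queue))
```

In a real system the queue would come from a scheduler or message broker, and each job could itself be dispatched to a different machine.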
- Parallel Processing: divides a program's instructions among multiple processors to exploit the full CPU power of a machine, with the objective of finishing the program in less time; it is implemented at the programming level.
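A short sketch of parallel processing on a single machine, using Python's standard `multiprocessing.Pool` to divide work among worker processes. The `square()` workload is a trivial placeholder for a CPU-bound task.

```python
# Parallel-processing sketch: divide a list of inputs among
# multiple worker processes with multiprocessing.Pool.
from multiprocessing import Pool

def square(n: int) -> int:
    """Placeholder CPU-bound work done by each worker."""
    return n * n

def parallel_squares(numbers: list[int], workers: int = 4) -> list[int]:
    """Map square() over the inputs across `workers` processes."""
    with Pool(processes=workers) as pool:
        return pool.map(square, numbers)

if __name__ == "__main__":
    print(parallel_squares(list(range(8))))  # -> [0, 1, 4, 9, 16, 25, 36, 49]
```

The speedup only pays off when each task is heavy enough to outweigh the cost of spawning processes and moving data between them.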
Batch and parallel processing are both popular ways to harness a computer's power and improve program performance, so choose whichever fits your requirements. Batch processing is the stronger fit for distributed computing: parallel processing is optimal on a single machine, while batch processing suits larger workloads spread across many machines.
You can look into Apache Spark, Hadoop, or AWS Elastic MapReduce (EMR) to learn more about batch processing and its optimization. In CI/CD, runner machines can be hosted in several ways and dedicated to specific jobs. I like to use GitLab CI with custom runners: I can pin specific jobs to specific machines to split up the bulk of the workload. For more, see GitLab Runner autoscaling with AWS Fargate.
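As a hypothetical illustration of pinning jobs to machines, a `.gitlab-ci.yml` can route jobs to custom runners via tags. The stage names, scripts, and tag names below are assumptions for the sketch, not a real project configuration.

```yaml
# Hypothetical .gitlab-ci.yml: route heavy batch jobs to a dedicated
# runner pool via tags; names here are illustrative only.
stages:
  - build
  - batch

build-report:
  stage: build
  script:
    - ./scripts/build_report.sh
  tags:
    - small-runner        # lightweight shared machine

big-data-processing:
  stage: batch
  script:
    - ./scripts/process_dataset.sh
  tags:
    - fargate-batch       # custom runner pool scaled on AWS Fargate
```

A job only runs on a runner registered with a matching tag, which is how the heavy batch work stays off the small shared machines.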
Thanks for reading,
Please leave a comment if you have any ideas or suggestions. I hope to learn more from you.
– Microservice architecture documentation: https://microservices.io/
– AWS Microservice deployment with Fargate: https://d1.awsstatic.com/whitepapers/microservices-on-aws.pdf