Container (or deployment) orchestration is the automation of taking your application and placing it on infrastructure (typically a machine or VM) to execute as part of your larger application, while ensuring that the workload continues to run through a variety of potential failures. Scheduling, a subset of orchestration, determines which node (or nodes) in your infrastructure the application should run on. Controlling this scheduling is often necessary to ensure that scheduled applications are actually capable of running.
There are two main approaches I’ve seen used: Task-based Scheduling and Capability-based Scheduling.
Task-based scheduling is the practice of annotating nodes for a given application, team, or purpose, and ensuring that applications are only scheduled on their own nodes. The underlying infrastructure has the most control over what is scheduled where.
In Kubernetes, this leads to a very taint-heavy workflow: all nodes have a set of taints and reject any application that does not explicitly carry a matching toleration. In this model, the deployment tells the scheduler which set of resources the workload should be scheduled on.
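As an illustrative sketch (the node name, taint key, and image are hypothetical), a node might be tainted for a single team, and only pods carrying the matching toleration would be admitted:

```yaml
# Taint a node so only workloads for "team-a" can schedule onto it:
#   kubectl taint nodes node-1 team=team-a:NoSchedule

# Pod spec fragment with the toleration that permits scheduling onto
# those tainted nodes.
apiVersion: v1
kind: Pod
metadata:
  name: team-a-app
spec:
  tolerations:
    - key: "team"
      operator: "Equal"
      value: "team-a"
      effect: "NoSchedule"
  containers:
    - name: app
      image: example.com/team-a/app:latest  # hypothetical image
```

Note that a toleration only permits scheduling onto the tainted nodes; in practice this model also pairs each workload with a node selector or affinity so it lands only on its own nodes.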
Task-based scheduling is better at ensuring node-level isolation between tasks (or teams), and closely maps to non-orchestrated autoscaling groups in non-containerized environments; i.e., it is a common first step after a “lift and shift”-style migration.
However, it makes it significantly easier to end up with unschedulable workloads, and it requires tighter coordination between infrastructure and application releases than capability-based scheduling does.
Capability (attribute) Based
Capability (or attribute) based scheduling is the practice of annotating the infrastructure nodes with the functionality that the node can provide. In Kubernetes, this leads to a label-heavy workflow, where all nodes are labeled with the functionality or limitations they possess. For example, a set of nodes may have a preemptible label if the node is a spot/preemptible instance.
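A minimal sketch of this workflow (node name, label key, and image are hypothetical) labels the node with its capability, and a workload that needs that capability selects on the label:

```yaml
# Label a node to advertise a capability:
#   kubectl label nodes node-1 preemptible=true

# Pod fragment that requires preemptible nodes via a nodeSelector.
apiVersion: v1
kind: Pod
metadata:
  name: batch-job
spec:
  nodeSelector:
    preemptible: "true"
  containers:
    - name: worker
      image: example.com/batch/worker:latest  # hypothetical image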
Capability-based scheduling gives significantly more power and flexibility to the scheduler, allowing workloads to be scheduled based on application preferences against more generically-defined infrastructure. In most cases, there can be a hard separation between the application and infrastructure definitions, without risking unschedulable workloads.
This approach also provides for a concept of preference in scheduling that a task-based model cannot express. It is reasonable for a workload to prefer to be scheduled on either a preemptible (or non-preemptible) node, but accept another configuration based on infrastructure availability. This preference-based scheduling is not practical when infrastructure nodes specify the workloads that can run on them.
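In Kubernetes, this kind of soft preference is expressed with `preferredDuringSchedulingIgnoredDuringExecution` node affinity. A sketch (label key and image are hypothetical):

```yaml
# Pod fragment expressing a soft preference for preemptible nodes:
# the scheduler favors matching nodes, but falls back to any other
# node if none are available.
apiVersion: v1
kind: Pod
metadata:
  name: flexible-worker
spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          preference:
            matchExpressions:
              - key: preemptible
                operator: In
                values: ["true"]
  containers:
    - name: worker
      image: example.com/batch/worker:latest  # hypothetical image
```

Swapping `preferred…` for `requiredDuringSchedulingIgnoredDuringExecution` turns the same expression into a hard constraint, which is what makes this model strictly more expressive than taint-only scheduling.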
In the end, I strongly prefer capability-based scheduling over the task-based alternative. Even in situations where some level of resource isolation is required, this can be accomplished using appropriate resource requests and workload priority policies.
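As a sketch of what those isolation tools look like (the class name, priority value, and image are hypothetical), resource requests plus a `PriorityClass` let important workloads claim capacity without dedicating nodes to them:

```yaml
# A PriorityClass plus resource requests: one way to get
# isolation-like guarantees without reserving whole nodes.
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: critical-service
value: 100000
globalDefault: false
description: "Higher priority for latency-sensitive services"
---
apiVersion: v1
kind: Pod
metadata:
  name: important-app
spec:
  priorityClassName: critical-service
  containers:
    - name: app
      image: example.com/app:latest  # hypothetical image
      resources:
        requests:
          cpu: "500m"
          memory: "512Mi"
        limits:
          cpu: "1"
          memory: "1Gi"
```

With a high enough priority, the scheduler will preempt lower-priority pods to make room, which covers many of the cases that otherwise push teams toward dedicated nodes.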
Task-based infrastructure is taking control away from the scheduler, which is rather good at scheduling workloads efficiently, in order to introduce statically-defined scheduling behaviour. You miss out on scheduling improvements and features in exchange for some up-front clarity and predictability.
In the event you find yourself in a position where you need more granular control than the default scheduler in your orchestration system provides, I would recommend using a custom scheduler, rather than statically defining scheduling behaviour at the infrastructure level.
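In Kubernetes, opting a workload into a custom scheduler is a single field on the pod spec (the scheduler name here is hypothetical):

```yaml
# Pod fragment handled by a custom scheduler instead of the default.
apiVersion: v1
kind: Pod
metadata:
  name: specialized-workload
spec:
  schedulerName: my-custom-scheduler  # hypothetical; omitting this uses "default-scheduler"
  containers:
    - name: app
      image: example.com/app:latest  # hypothetical image
```

The custom scheduler runs alongside the default one and only claims pods that name it, so the rest of the cluster keeps its existing behaviour.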
This assumes you are not depending on capabilities that the underlying infrastructure cannot provide.