Monthly Archives: August 2023

MoSCoW Prioritization: History, Overview, and Critical Analysis

Abstract

MoSCoW prioritization stands as a seminal framework for categorizing the importance and urgency of tasks and features in various project management and development settings. This paper delves into the origins, conceptual framework, and applications of the MoSCoW method. Furthermore, a critical analysis is undertaken to explore the strengths, limitations, and challenges inherent to this methodology.

Continue reading

RACI: History, Overview, and Critical Analysis

Abstract

The RACI (Responsible, Accountable, Consulted, Informed) matrix, a framework for defining roles and responsibilities in organizational contexts, has been widely adopted across diverse industries. This paper offers an in-depth exploration of the conceptual origins of RACI, its application across various organizational paradigms, and its impact on project management and organizational culture. It also critically examines the limitations and challenges inherent to its implementation, while suggesting possible extensions and improvements to make it more effective in modern organizational ecosystems.

Continue reading

Basic Guide to Calculating the Number of Shards Needed for an Amazon Kinesis Stream

Here’s a basic guide to help you calculate the number of shards needed for an Amazon Kinesis stream.

Step 1: Understand Shards

Shards are the fundamental units of throughput in a Kinesis stream. Each shard can support a certain amount of data read and write throughput. To determine the number of shards needed, you’ll need to consider your data volume and your desired throughput.

Step 2: Estimate Data Volume

  1. Start by estimating the amount of data you expect to produce or consume per second. This can be in terms of data size (e.g., megabytes) or records per second.
  2. Consider the peak times when your data production or consumption will be at its highest. This will help you estimate the maximum throughput required.
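
As a quick illustration (the figures here are hypothetical): if you expect a peak of roughly 2,000 records per second at about 5 KB per record, your peak write volume is approximately 10 MB/s.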

Step 3: Calculate Shards

  1. Calculate the write capacity required: Divide your estimated data volume per second by the maximum data volume that a shard can handle for writes (1 MB/s). Note that a shard is also limited to 1,000 records per second for writes, so check that limit as well if your records are small.

    Write Capacity = Estimated Data Volume (MB/s) / 1 MB/s per Shard
  2. Calculate the read capacity required: Divide your estimated data volume per second by the maximum data volume that a shard can handle (2 MB/s for reads).

    Read Capacity = Estimated Data Volume (MB/s) / 2 MB/s per Shard
  3. Determine the required number of shards: The number of shards needed is the larger of the write and read capacities calculated above, rounded up to the next whole number (a short calculation sketch follows these formulas).

    Number of Shards = Max(Write Capacity, Read Capacity)
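
The arithmetic above is easy to script. Below is a minimal Python sketch of the calculation, assuming the per-shard limits quoted above; the input estimates are placeholders to replace with your own figures.

    import math

    def required_shards(write_mb_per_sec, read_mb_per_sec, records_per_sec=0):
        # Per-shard limits assumed here: 1 MB/s (or 1,000 records/s) for writes
        # and 2 MB/s for reads.
        write_capacity = max(write_mb_per_sec / 1.0, records_per_sec / 1000.0)
        read_capacity = read_mb_per_sec / 2.0
        return math.ceil(max(write_capacity, read_capacity))

    # Hypothetical peak estimates: 10 MB/s written, 18 MB/s read, 2,000 records/s.
    print(required_shards(write_mb_per_sec=10, read_mb_per_sec=18, records_per_sec=2000))  # -> 10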

Step 4: Adjust for Scalability and Redundancy

Keep in mind that the number of shards you initially calculate should provide enough capacity for current and future needs. Additionally, consider adding some extra shards to handle unexpected spikes in traffic and to ensure redundancy in case of shard failures.

Step 5: Consider Kinesis Data Streams Limits

Be aware of the AWS quota on the maximum number of shards you can have in a single stream. The default quota has historically been 500 shards per stream in some regions (and 200 in others), and it can be raised on request; check the current AWS service quotas for your region.

Step 6: Monitor and Scale

Regularly monitor your stream’s performance using AWS CloudWatch metrics. If you notice that you’re hitting shard limits or experiencing latency issues, you might need to adjust the number of shards by scaling up or down.
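
As a rough illustration of this step, the Python sketch below reads the stream’s IncomingBytes metric from CloudWatch and then resizes the stream with the UpdateShardCount API; the stream name and target shard count are placeholders, not recommendations.

    import boto3
    from datetime import datetime, timedelta, timezone

    cloudwatch = boto3.client("cloudwatch")
    kinesis = boto3.client("kinesis")

    STREAM = "my-stream"  # hypothetical stream name

    # Peak write volume (MB/s) over the last hour, from 5-minute sums of IncomingBytes.
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/Kinesis",
        MetricName="IncomingBytes",
        Dimensions=[{"Name": "StreamName", "Value": STREAM}],
        StartTime=datetime.now(timezone.utc) - timedelta(hours=1),
        EndTime=datetime.now(timezone.utc),
        Period=300,
        Statistics=["Sum"],
    )
    peak_mb_per_sec = max((p["Sum"] for p in stats["Datapoints"]), default=0) / 300 / 1_000_000
    print(f"Peak write volume: {peak_mb_per_sec:.2f} MB/s")

    # If that exceeds what the current shard count can absorb (1 MB/s per shard),
    # resize the stream; UNIFORM_SCALING splits/merges shards evenly.
    kinesis.update_shard_count(
        StreamName=STREAM,
        TargetShardCount=12,  # placeholder target
        ScalingType="UNIFORM_SCALING",
    )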

Tips:

  • If your data volume is unpredictable, consider Kinesis Data Streams on-demand capacity mode, or implement automatic resharding (for example, CloudWatch alarms triggering a resize) so the shard count tracks the incoming data rate.
  • If you’re using Kinesis Data Streams for real-time analytics, make sure your shard count aligns with your desired processing speed and capacity.

Remember that shard calculations can be complex and may vary based on factors like data size, distribution, and your specific use case. Be prepared to iterate and adjust the number of shards as your application evolves and your understanding of its needs deepens.

Basic Guide to Configuring the Video Producer to Connect to Your Amazon Kinesis Video Stream

Configuring the video producer to connect to your Amazon Kinesis Video Stream involves a few steps that ensure secure and reliable data transmission. Here’s a basic guide:

Configure the Video Producer with Stream Name and Credentials

  1. Access Keys or IAM Roles: To connect to your Kinesis Video Stream, the video producer needs the appropriate credentials. These can be provided through AWS access keys (Access Key ID and Secret Access Key) or, for better security, through AWS Identity and Access Management (IAM) roles. IAM roles provide temporary security credentials to entities (such as applications or services) instead of relying on permanent access keys. When using IAM roles, you create a role and attach it to the video producer (e.g., an EC2 instance, an IoT device, or your application); the role defines the permissions the producer has, ensuring least-privilege access.
  2. Stream Name: The video producer needs to know the name of the Kinesis Video Stream it should send data to. This stream name acts as the destination where the video data will be ingested.
  3. AWS SDKs and Libraries: Amazon provides official SDKs and libraries for different programming languages that simplify the process of interacting with Kinesis Video Streams. These SDKs offer functions and methods to handle tasks like initializing the connection, encoding video data, and sending it to the stream.
  4. Encoding and Packaging: Video data needs to be properly encoded and packaged before being sent to the stream. The exact encoding and packaging requirements will depend on the SDK you’re using and the type of data you’re transmitting. Make sure to follow the guidelines provided by Amazon for packaging video frames efficiently.
  5. API Calls and Endpoints: Behind the scenes, the video producer SDK interacts with the Kinesis Video Streams API. This API is responsible for handling the communication between your producer and the Kinesis service. The SDK abstracts the API calls, allowing you to focus on sending your video data rather than managing the low-level API interactions.
  6. Token Management (Optional): For enhanced security, you might use temporary security tokens for authentication instead of long-lived access keys. These tokens can be obtained using various methods, such as AWS Security Token Service (STS) and web identity federation. This approach reduces the risk of exposing permanent credentials.
  7. Error Handling and Retries: Since network and service issues can occur, it’s important to implement error handling and retries in your producer application. The SDKs often provide built-in mechanisms for handling errors and resending data when transient failures happen.
  8. Throttling and Rate Limiting: AWS services, including Kinesis, impose rate limits to ensure fair usage and to prevent abuse. Your producer should be designed to handle throttling by implementing back-off strategies or other mechanisms that allow it to slow down when rate limits are reached (a short sketch combining endpoint lookup with retries and back-off follows this list).
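
To make some of the points above concrete, here is a minimal Python sketch, assuming the boto3 SDK, that looks up the PUT_MEDIA endpoint for a named stream and retries with exponential back-off on transient errors. The stream name, region, retry settings, and error codes are illustrative only; the actual video upload would still be handled by a producer SDK such as the C++/GStreamer producer.

    import time

    import boto3
    from botocore.exceptions import ClientError

    STREAM_NAME = "my-video-stream"   # hypothetical stream name
    REGION = "us-east-1"              # hypothetical region

    # Credentials are resolved by the default provider chain (environment variables,
    # an attached IAM role, or temporary STS credentials), so none are hard-coded here.
    kvs = boto3.client("kinesisvideo", region_name=REGION)

    def get_put_media_endpoint(max_attempts=5):
        # Fetch the endpoint a producer should send media to, retrying with
        # exponential back-off on throttling or transient service errors.
        for attempt in range(max_attempts):
            try:
                response = kvs.get_data_endpoint(StreamName=STREAM_NAME, APIName="PUT_MEDIA")
                return response["DataEndpoint"]
            except ClientError as err:
                code = err.response["Error"]["Code"]
                # Error codes treated as retryable here are illustrative.
                if code not in ("ThrottlingException", "ClientLimitExceededException", "ServiceUnavailable"):
                    raise
                time.sleep(2 ** attempt)  # 1s, 2s, 4s, ...
        raise RuntimeError("Could not obtain PUT_MEDIA endpoint after retries")

    print(get_put_media_endpoint())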

In summary, configuring the video producer involves setting up the necessary credentials (access keys or IAM roles), specifying the stream name as the target destination, and utilizing AWS SDKs to handle the complexities of data encoding, packaging, and secure transmission. Properly configuring your video producer ensures that your video data is securely and efficiently transmitted to your Amazon Kinesis Video Stream for processing and analysis.

Key Criticisms of the Server Side Public License (SSPL)

Introduction

The Server Side Public License (SSPL) was introduced by MongoDB, Inc. in 2018 as a way to address concerns about cloud providers profiting from open-source projects without contributing back to them. The SSPL has generated controversy and faced several criticisms:

  1. Not Officially Open Source: The SSPL hasn’t been recognized by the Open Source Initiative (OSI) as an open-source license. This means that software under SSPL does not meet the OSI’s Open Source Definition. One of the fundamental principles of open-source licensing as defined by the OSI is the freedom to use the software for any purpose without restriction.
  2. Too Restrictive: One of the fundamental tenets of open source is the freedom to use, modify, and distribute software. The SSPL imposes restrictions on providing the software as a service, which some argue goes against the spirit of open source.
  3. Vague Language: Critics have pointed out that the language used in the SSPL is somewhat ambiguous. Specifically, the definition of what constitutes a “service” can be open to interpretation, potentially leading to legal gray areas.
  4. Business Concerns: Some businesses are wary of using or contributing to SSPL-licensed software because they fear it could affect their ability to offer services in the future or because they believe it might lead to licensing complications.
  5. Fragmentation of the Open Source Ecosystem: Introducing new licenses, especially controversial ones, can fragment the community. Having many different licenses with slightly different terms can be confusing and counterproductive.
  6. Reaction from Cloud Providers: Major cloud providers, like Amazon Web Services (AWS), responded to the SSPL by building compatible alternatives (e.g., Amazon DocumentDB, which offers MongoDB API compatibility) to avoid the SSPL’s restrictions.
  7. Licensing Chain: There are concerns about how the SSPL’s terms might affect other software that interacts with SSPL-licensed software. The SSPL requires that anyone offering the licensed software as a service release, under the SSPL, the source code of the entire service stack used to provide it, which can have implications for software integration and composition.

Conclusion

It’s worth noting that MongoDB, Inc. introduced the SSPL to address what they saw as a significant issue: major cloud providers monetizing open-source software without giving back to the community or the original developers. However, the SSPL’s approach to solving this problem has led to debate within the tech community about the best ways to balance open source principles with sustainable business models.

Rethinking Team Dynamics: Balancing Collaboration and Efficiency and Unleashing Individual Potential

For simpler tasks, working individually may prove more effective. Consider trade-offs between collaboration and efficiency.

A recent study questions teamwork’s efficiency, revealing social biases and “herding” effects impacting collective intelligence. “Social loafing” and limited learning opportunities in groups can hinder performance.

It calls to mind Fred Brooks’s work in “The Mythical Man-Month”, one of the cornerstone texts in IT/system delivery: “Brooks discusses several causes of scheduling failures. The most enduring is his discussion of Brooks’s law: Adding manpower to a late software project makes it later.”

Brooks identifies:
“Group intercommunication formula: n(n − 1)/2.
Example: 50 developers give 50 × (50 – 1)/2 = 1,225 channels of communication.”

Equally, teams burn out, often one individual at a time. This can be mitigated by swapping in new people, but eventually the team as a whole burns out and needs to be refreshed too. I suspect that for many people burnout is exacerbated by the volume of communication required, especially where they are neurodivergent.

When allocating tasks, consider whether to assign them to the team or to individual contributors. And when you do, remember to frame them as “Mission Based” objectives.

Link to study: https://theconversation.com/teamwork-is-not-always-the-best-way-of-working-new-study-211693

Link to book: https://en.wikipedia.org/wiki/The_Mythical_Man-Month

Thank you to Professor Amanda Kirby for sharing the research and study on LinkedIn.


Exploring the Concerns Surrounding the Term ‘High-Functioning Autism’: A Deeper Look into Potential Offensiveness

The term “high-functioning autism” has been criticized by many individuals within the autism community, as well as by advocates and experts, for a variety of reasons. While it is not inherently offensive to everyone, there are several concerns associated with its usage that highlight potential issues.

Continue reading

Looking for a Home: The search for an alternative to “Asperger’s Syndrome”

The task of finding an alternative name for what was formerly referred to as Asperger’s Syndrome is undoubtedly a complex and challenging endeavour. This challenge stems from the intricacies of capturing the essence of a unique cognitive profile within the broader autism spectrum while avoiding any unintended negative connotations or exclusionary subtext.

Continue reading