Lab 3 - Population Growth

Authors

Charles Lehnen

Melissa Guzman

Cameron Egan

Functions

Functions perform actions on inputs and generally produce an output. We have already been using them, such as class(), matrix(), list(), and subset().

Object-Oriented Programming

R is an object-oriented language because everything in R is an object.

By definition in programming, an object is an instance of a class. Classes have certain attributes/properties and specific functions, which are referred to as methods. When an object is created in your code, that object inherits the attributes of the class you assigned it to. These attributes also serve as built-in parameters for the associated methods.

There are also subclasses (child classes) and superclasses (parent classes). If the object you create is a member of a subclass, it will inherit all attributes of its own class as well as those of its parent classes.
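As a small illustration of inheritance (assuming R 4.0 or later, where the matrix class is a subclass of the array class):

```r
# Create a 2 x 2 matrix
m <- matrix(1:4, nrow = 2)

# The object reports its own class and its parent (superclass)
class(m)
# [1] "matrix" "array"

# inherits() confirms membership in the parent class
inherits(m, "array")
# [1] TRUE
```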

Object-oriented programming is desirable over procedural programming because it is an efficient, non-redundant way to organize and assign attributes to data. Through object-oriented programming you do not have to assign attributes over and over again each time you initialize an object.

Additionally, by using inheritance the same method can operate differently on an object depending on its specific subclass or superclass, which increases readability and reduces complexity. For example:
# Create a vector that contains numbers 1 to 10 (`1:10`)
vec <- c(1:10)

# Apply `summary()` to the vector
summary(vec)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
   1.00    3.25    5.50    5.50    7.75   10.00 
# Convert the same vec into a dataframe
df <- as.data.frame(vec)

# Apply `summary()` to the dataframe
summary(df)  # Treats each column separately, including column names
      vec       
 Min.   : 1.00  
 1st Qu.: 3.25  
 Median : 5.50  
 Mean   : 5.50  
 3rd Qu.: 7.75  
 Max.   :10.00  

Whenever possible in your programming career, it is generally better to write code following object-oriented principles over procedural programming principles.

Writing Your Own Basic Functions

Functions must be defined before they can be called. In R, the definition of a function allows you to:

  1. Name the function

  2. List the required inputs as parameters

  3. Define what the function does

The basic format for writing a function in R is:

Function_Name <- function(parameter1, parameter2) {
  c(parameter1, parameter2)
}
Run the above code chunk. What seems to happen?

When we defined the function, nothing seemed to happen. That is because we must call the function in order to run it.

In order to call the function, we must provide the required parameters as arguments.

Function_Name("argument1", "argument2")
[1] "argument1" "argument2"

Now it is your turn! Write a function called Calculate_Mean() that calculates the mean of vector c(1,2,3,4) using the sum() and length() functions within your function definition.

Remember, if you want to check what a function does in R, you can use the help() function or the ? operator. For example, help(length) or ?length.

vector_1 <- c(1,2,3,4)

Calculate_Mean <- function(vector) {
  sum(vector) / length(vector)
}

Calculate_Mean(vector_1)
[1] 2.5
mean(vector_1)  # Check against the built-in mean() function
[1] 2.5

Functions can also contain default values for parameters following the format:

Function_Name <- function(parameter1, parameter2 = "default") {
  c(parameter1, parameter2)
}

Function_Name("argument1")
[1] "argument1" "default"  

These default values can be overridden by explicitly provided arguments:

Function_Name("argument1", "argument2")
[1] "argument1" "argument2"
Write a function that divides each element of vector c(1,2,3,4) by two using a default parameter value of 2.
Divide_by_Two <- function(vector, denominator = 2) {
  vector / denominator
}

Divide_by_Two(vector_1)
[1] 0.5 1.0 1.5 2.0
Now call the same function again without modifying the function itself, but divide each element by 3.
Divide_by_Two(vector_1, 3)
[1] 0.3333333 0.6666667 1.0000000 1.3333333

Function Returns

Note what happens by default when we assign the result of a function to a new object inside of a function:

vector_1 <- c(1,2,3,4)

Divide_by_Two <- function(vector, denominator = 2) {
  vector_4 <- vector / denominator
}

Divide_by_Two(vector_1)
vector_4
Error: object 'vector_4' not found

The new object vector_4 is not found outside of the function. This is happening because objects, datasets, and even other functions inside of a function are kept within their own environment.
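Here is a minimal sketch of that scoping behavior (the names Make_Object and inside_value are just illustrative):

```r
# Objects created inside a function exist only in the function's environment
Make_Object <- function() {
  inside_value <- 42  # local to the function
  inside_value        # returned when the function is called
}

Make_Object()
# [1] 42

exists("inside_value")  # the object never reached the global environment
# [1] FALSE
```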

The base environment is referred to as the “global environment.” The contents of this global environment are displayed in the upper-right pane of RStudio. We can confirm that vector_4 is not in the global environment by checking that it does not appear in the environment pane. So how do we pass this object from inside the function into the global environment?

R gives you a few options and tries to make this easy for you. Let’s start by clearing the global environment with rm(list=ls()).

rm(list=ls()) 

Now you can see that the environment pane is empty.

Likely the most straightforward way in R to assign the value produced by a function is to assign the value to an object in the global environment when the function is called:
vector_1 <- c(1,2,3,4)

Divide_by_Two <- function(vector, denominator = 2) {
  vector / denominator
}

vector_4 <- Divide_by_Two(vector_1)
vector_4
[1] 0.5 1.0 1.5 2.0
Super Assignment in R
Another method you may see, but should avoid, is making a global assignment with the “super assignment operator” or “global assignment operator” <<-. Even if an assignment is made inside of a function, this assignment will pass the object globally.
rm(list=ls()) 

vector_1 <- c(1,2,3,4)

Divide_by_Two <- function(vector, denominator = 2) {
  vector_4 <<- vector / denominator
}

Divide_by_Two(vector_1)
vector_4
[1] 0.5 1.0 1.5 2.0
This method is dangerous because if the function is run more than once, the object can unintentionally be reassigned each time:
Divide_by_Two <- function(vector, denominator = 2) {
  vector_4 <<- vector / denominator
}

vector_4 <- Divide_by_Two(vector_4)
vector_4
[1] 0.25 0.50 0.75 1.00
vector_4 <- Divide_by_Two(vector_4)
vector_4
[1] 0.125 0.250 0.375 0.500
vector_4 <- Divide_by_Two(vector_4)
vector_4
[1] 0.0625 0.1250 0.1875 0.2500
Explicit Return Statements
As you saw above, unlike some other programming languages, R does not require you to explicitly include return statements in order to return values from a function. However, you can still explicitly return values with the return() statement.

This is more in-line with explicit programming principles and can allow you to be certain which values are returned instead of allowing R to automatically make this choice for you.
rm(list=ls()) 

vector_1 <- c(1,2,3,4)

Divide_by_Two <- function(vector, denominator = 2) {
  vector_4 <- vector / denominator
  return(vector_4)
}

vector_4 <- Divide_by_Two(vector_1)
vector_4
[1] 0.5 1.0 1.5 2.0
As a side note, you may also see return statements structured without the return() function, since R automatically returns the last evaluated expression in a function:
Divide_by_Two <- function(vector, denominator = 2) {
  vector_4 <- vector / denominator
  vector_4
}

(Roser and Ritchie 2023)

Models

Models are an abstraction of reality. This means that models exclude much more information than they include. That is a limitation we must keep in mind when drawing conclusions from models, but this is also a very desirable attribute of models.

Imagine you were on a hiking trail and wanted to find the quickest way back to the trailhead. You see a sign-post with a map posted, so you approach it. If that map contained all of the information about every aspect of that area, that would be very unhelpful. Instead the map is designed as an abstraction, presenting only the information the users desire to know about the real world.

In general across fields, models are designed to provide at least one of three utilities to their users. The ability to:

  1. Predict

  2. Explain

  3. Generalize

Which of these three utilities is the trail map designed to provide?

1) Exponential/Density-Independent Population Growth

As you recall this course focuses heavily on community ecology. In lab, we will build gradually to the community level. Today we will start by analyzing interactions within a population.

What is a population? How many species does it include?

A population is a group of conspecifics (one species) that is isolated from other populations of the same species.

In ecology what is a community?

A community is interacting populations of different species in an ecosystem.

1a) Discrete Exponential Population Growth

We will begin our labwork by exploring density-independent population growth. Density-independent population growth occurs when the growth rate of a population size is not affected by the number of conspecifics in that population. Each individual in a population does not affect the fitness of another member of that same population.

Density-independent population growth is relatively simple to model because it can be represented by a simple geometric series and generally involves fewer parameters than other models of population growth.

However, it is also biologically useful to understand because density-independent population growth can occur in nature.

When in the natural world might you observe density-independent population growth?

When a population:

  1. Is below carrying capacity

  2. Has access to unlimited resources, like in a lab setting

  3. Is experiencing release from anthropogenic pressure

  4. Is experiencing predator release:

    • Via removal of the predators/parasites from the population’s range

    • Enemy release and the founder effect resulting from the population invading another area free from natural predators/parasites

  5. Etc.

Modeling: The Trouble with Tribbles

Let’s model a scenario of density-independent population growth together. In Star Trek the hairy, featureless species called tribbles seem to have an extraordinary population growth rate. They are said to be “born pregnant.” It seems as if their population growth might be density-independent, but let us make a model to see if we think that is true.

Star Trek: The Original Series episode, “The Trouble with Tribbles”, Fair use, Wikimedia

We decide on a transect and monitor the tribbles on the ship. Between generations we observe the following:

Day          Number of individuals
1967-12-29   17
1967-12-30   51
1967-12-31   153
1968-01-01   459
1968-01-02   1377

Create vectors from number of tribbles (n) per day (days):

days = c(1:5)
n = c(17, 51, 153, 459, 1377)

Let’s plot the number of tribbles (n) per day (days) using ggplot2 (Wickham 2016). Note that in order to use ggplot2 we must have a dataframe data structure as our input. Use geom_step() and geom_point() for the geoms.

library(ggplot2)
tribble_df <- data.frame(
  Days = days,
  N = n
)

ggplot(data = tribble_df, aes(x = Days, y = N)) +
  geom_step() +  # This creates the stepped line which shows discrete growth
  geom_point() +  # This adds points
  labs(x = "Days", y = "Number of Tribbles") # Labels for axes

Predicting population size

How can we predict how large we expect the population to be tomorrow? We need to calculate the following:

\[ \begin{equation*}\frac{N_{t+1}}{N_{t}}\end{equation*} \]

ratio = n[2:5]/n[1:4]
ratio
[1] 3 3 3 3
Now we can make a generalization: by what percent does the population seem to be increasing per day?

\[ 200\% \] \[ 3n\ =\ n\ +\ 2n \]

The engineers have estimated that the “Aeroponics Bay” can only provide enough oxygen to support 111,537 tribbles. We need to predict how many days until this population size is reached if this growth rate continues.

Using n[5], calculate how many tribbles you predict there will be on day 6 (n_day_6).
n[5]*3
[1] 4131
Using n[5]*3, calculate how many tribbles you predict there will be on day 7.
(n[5]*3)*3
[1] 12393

Do you see a trend? We can generalize this trend in the following way:

\[ \begin{array}{l}N_{6} \ =\ 3 N_{5}\\N_{7} \ =\ 3\ ( 3 N_{5})\\N_{8} \ =\ 3\ ( 3\ ( 3 N_{5}))\\\\= >\\\\N_{t} \ =\ 3^{t} N_{0}\end{array} \]

As a general term, we can introduce \(\lambda\) as the finite rate of increase. Note that \(\lambda\) has no unit.

\[ N_{t} \ =\ N_{0} \lambda ^{t} \]
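We can check this general formula against our transect data by treating the first count (17 tribbles) as \(N_0\) at \(t = 0\):

```r
N0 <- 17
lambda <- 3
t <- 0:4

# N_t = N_0 * lambda^t reproduces the observed daily counts
Nt <- N0 * lambda^t
Nt
# [1]   17   51  153  459 1377
```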

apply() Loops

Because R is built around vectors as its most basic unit, R is very well suited to applying functions across the rows or columns of a data structure. To improve readability, the apply family of functions can be used; these are similar to for loops, which are also a useful tool.
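As a quick comparison, here is the same element-wise operation written first as a for loop and then with sapply():

```r
vec <- 1:4

# for loop version: preallocate a result vector, then fill it element by element
squares_loop <- numeric(length(vec))
for (i in seq_along(vec)) {
  squares_loop[i] <- vec[i]^2
}

# sapply() version: the same operation in a single readable line
squares_apply <- sapply(vec, function(x) x^2)

identical(squares_loop, squares_apply)
# [1] TRUE
```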

Write a function to average the values of each numeric column of our matrix_1 <- cbind(c(6,5,4),c(3,2,1)) without using any loop functions.
matrix_1 <- cbind(c(6,5,4),c(3,2,1))

average_columns <- function(M) {
  c(sum(M[, 1]) / length(M[, 1]), sum(M[, 2]) / length(M[, 2]))
}

# These work as well:
# colMeans(M) 
# colSums(M)/nrow(M)

average_columns(matrix_1)
[1] 5 2

apply()

Now let’s do the same process with the apply() function and the mean function instead of our own function. The apply() function applies any given function to the columns or rows of multi-dimensional data structures only, e.g. matrices, arrays, and dataframes. Where possible, it simplifies the output, for example to a vector or matrix.

Following the help file, here is the structure: apply(X, MARGIN, FUN, …, simplify = TRUE), where MARGIN = 1 is for rows while MARGIN = 2 is for columns and FUN is any function we want to apply.

apply(matrix_1, MARGIN = 2, mean)
[1] 5 2
Try using apply() to find the square root of each element in our matrix_1
apply(matrix_1, MARGIN = 2, sqrt)
         [,1]     [,2]
[1,] 2.449490 1.732051
[2,] 2.236068 1.414214
[3,] 2.000000 1.000000
Now try using apply() to find the square root of each element in our vector_1
apply(vector_1, MARGIN = 1, sqrt)
Error in apply(vector_1, MARGIN = 1, sqrt): dim(X) must have a positive length

lapply()

The lapply() function performs in a similar way; however, it can also be used on 1-dimensional data structures like lists and vectors, where apply() would throw an error. Rather than operating on columns or rows, it operates element by element. The output is always a list, even if the input is multi-dimensional. For example, it treats a matrix as a list of its individual elements.

Let’s use it to take the square root of each element of vector_1
lapply(vector_1, sqrt)
[[1]]
[1] 1

[[2]]
[1] 1.414214

[[3]]
[1] 1.732051

[[4]]
[1] 2
See what happens when we call lapply() with matrix_1 as an input
lapply(matrix_1, sqrt)
[[1]]
[1] 2.44949

[[2]]
[1] 2.236068

[[3]]
[1] 2

[[4]]
[1] 1.732051

[[5]]
[1] 1.414214

[[6]]
[1] 1

sapply()

sapply() is similar to lapply() in that it can accept a 1-dimensional data structure as input. However, it does not always return a list; it simplifies the result to a vector or matrix when possible.

Let’s see what happens when we use sapply() on vector_1 instead. What’s the difference?
sapply(vector_1, sqrt)
[1] 1.000000 1.414214 1.732051 2.000000

Other apply Family Functions

Here is a summary of what we have covered so far.

There are other apply family functions like mapply() and tapply(), however we will not cover those at this time.

It’s worth mentioning that there are other alternatives to the apply family of functions including map() from the purrr(Wickham and Henry 2023) package as well as rowwise(), colwise(), and across() from the dplyr(Wickham et al. 2022) package.

apply() Family Shorthand Notation

Notation like sapply(x, function(y) y) is referred to as “shorthand notation” or “shorthand syntax.” Because we are using a placeholder function without even naming it, it is referred to as an “anonymous function,” and we can use shorthand to define what the function does. Here x is the argument and y is the parameter.
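For example, the anonymous function below squares each element of a vector. Since R 4.1, you may also see the equivalent backslash (“lambda”) shorthand:

```r
# Anonymous function written in shorthand notation
sapply(1:3, function(y) y^2)
# [1] 1 4 9

# Equivalent backslash shorthand (requires R >= 4.1)
sapply(1:3, \(y) y^2)
# [1] 1 4 9
```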

Let’s go back to managing our tribble crisis. Recall we found we were able to represent density-independent population growth with the following equation:

\[ N_{t} \ =\ N_{0} \lambda ^{t} \]

Calculate how many more days you predict until the population exceeds 111,537 tribbles. You can use the following sapply() shorthand format that you just learned about:

Nt.s <- sapply(N0, function(n) n * lambda^t)
N0 <- 1377
lambda <- 3
t <- 1:5

Nt.s <- sapply(N0, function(n) n * lambda^t)
Nt.s
       [,1]
[1,]   4131
[2,]  12393
[3,]  37179
[4,] 111537
[5,] 334611
Based on these results, in how many more days do you predict the ship will run out of oxygen?

Only 4 days due to the compounding nature of density-independent growth.

The logarithmic scale is useful when analyzing rates. In this example, the logarithmic scale provides a clearer way to visualize this steady trend that may not have been immediately apparent in our earlier plot.

tribble_df_discrete <- data.frame(
  Days = t,
  N = Nt.s
)

# Plot without the logarithmic scale
ggplot(tribble_df_discrete , aes(x = Days, y = N)) +
  geom_step() + 
  geom_point() +  
  labs(x = "Days", y = "Population Size") 

# Now plot with a logarithmic scale by adding the line `scale_y_log10()` to the end of your ggplot pipe
ggplot(tribble_df_discrete , aes(x = Days, y = N)) +
  geom_step() + 
  geom_point() +  
  labs(x = "Days", y = "Population Size") +
  scale_y_log10()  # Log transform the y-axis

Effects of initial population size

From our previous analysis, it looks like the ship is in danger. How much of an effect would it have to cull the current population?

Use the following values to determine how much the population would need to be culled today to extend oxygen reserves another full day:

N0.s <- c(N0, 918, 459, 230)
N0.s <- c(N0, 918, 459, 230)
t <- 1:5

Nt.s <- sapply(N0.s, function(n) n * lambda^t)
Nt.s
       [,1]   [,2]   [,3]  [,4]
[1,]   4131   2754   1377   690
[2,]  12393   8262   4131  2070
[3,]  37179  24786  12393  6210
[4,] 111537  74358  37179 18630
[5,] 334611 223074 111537 55890

Initial population size can have a large effect on populations undergoing density-independent growth!

Effects of differing per capita growth rates

So far, we have modeled this scenario with a per capita growth rate of 3. What do you think would happen if the per capita growth rate approached 1? What if it went below 1?

Now, let’s check your guess by plotting the scenarios where \(\lambda =\) c(0.75, 1, 1.5) for N0 over the next 5 days.
lambdas <- c(0.75, 1, 1.5)

Nt.s <- sapply(lambdas, function(lambda) N0 * lambda^t)

Nt.df <- data.frame( # We need to convert to a dataframe to work with `ggplot2`
  Days = rep(t, times = length(lambdas)),
  N = as.vector(Nt.s),
  Lambda = rep(lambdas, each = length(t))
)


ggplot(Nt.df, aes(x = Days, y = N, group = Lambda, color = factor(Lambda))) +
  geom_step() +
  geom_point() +
  geom_hline(yintercept = N0, linetype = "dashed") +  # Horizontal line at N0
  labs(x = "Days", y = "N", color = "Lambda")

Observe the plot above. How do the different values of \(\lambda\) affect the population size?

\[ \begin{array}{l} \lambda \ >\ 1\ \Longrightarrow \ Positive\ population\ growth\\ \lambda \ =\ 1\ \Longrightarrow \ Population\ size\ constant\\ \lambda \ <\ 1\ \Longrightarrow \ Population\ decline \end{array} \]

1b) Continuous Exponential Population Growth

In nature, discrete population growth is not as common as continuous population growth. An example of discrete population growth is the growth of a population of annual plants, whereas an example of continuous population growth is the growth of a human population, where reproduction occurs continuously.

In general as a convention, \(\lambda\) is used for discrete population growth and \(r\) is used for continuous population growth.

Exponential/Density-Independent vs. Logistic/Density-Dependent

Discrete, exponential: the population grows by a constant proportion \(\lambda\) at each time step (e.g. year).

\[ N \lambda^{t} \]

Discrete, logistic: the population grows by a constant proportion each time step, but the growth rate decreases as the population size approaches the carrying capacity \(K\).

\[ \lambda N(1-\frac{N_{t}}{K}) \]

Continuous, exponential: the population grows continuously at an instantaneous rate \(r\).

\[ Ne^{rt} \]

Continuous, logistic: the population grows continuously (not at discrete time steps), but the growth rate decreases as the population size approaches the carrying capacity \(K\).

\[ rN(1-\frac{N}{K}) \]
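Because \(r = log(\lambda)\), the discrete and continuous exponential formulas agree at whole time steps. A quick numerical check using our tribble values:

```r
N0 <- 1377
lambda <- 3
r <- log(lambda)  # convert the finite rate of increase to an instantaneous rate
t <- 1:5

# N0 * lambda^t (discrete) and N0 * exp(r * t) (continuous) match
all.equal(N0 * lambda^t, N0 * exp(r * t))
# [1] TRUE
```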


Let’s start with plotting discrete exponential population growth again in the same way we did above

N0 <- 1377
lambda <- 3
t <- 1:5

# Create equation for discrete population growth
Nt_discrete <- sapply(N0, function(n) n * lambda^t)


df_discrete <- data.frame(Days = t, Population = Nt_discrete)

ggplot(df_discrete, aes(x = Days, y = Population)) +
  geom_step() +
  geom_point() +
  labs(title = "Discrete Population Growth", x = "Days", y = "Population Size")

What happens when we make the spacing between days smaller? Try it with `t <- seq(1, 5, by = 0.25)`

N0 <- 1377
lambda <- 3
t <- seq(1, 5, by = 0.25)

# Create equation for discrete population growth
Nt_discrete <- sapply(N0, function(n) n * lambda^t)


df_discrete <- data.frame(Days = t, Population = Nt_discrete)

ggplot(df_discrete, aes(x = Days, y = Population)) +
  geom_step() +
  geom_point() +
  labs(title = "Discrete Population Growth", x = "Days", y = "Population Size")

How about even smaller?

N0 <- 1377
lambda <- 3
t <- seq(1, 5, by = 0.01)

# Create equation for discrete population growth
Nt_discrete <- sapply(N0, function(n) n * lambda^t)


df_discrete <- data.frame(Days = t, Population = Nt_discrete)

ggplot(df_discrete, aes(x = Days, y = Population)) +
  geom_step() +
  geom_point() +
  labs(title = "Discrete Population Growth", x = "Days", y = "Population Size")

We are approaching a continuous growth pattern.

We can model continuous exponential population growth directly with the formula:

\[ Ne^{rt} \]

Re-write our code with this equation along with \(r=log(\lambda)\) and geom_line() instead of geom_step().

# Define initial parameters
N0 <- 1377
lambda <- 3
r <- log(lambda)
t <- seq(1, 5, by = 0.01)

# Create equation for continuous population growth
Nt_continuous <- N0 * exp(r * t)

df_continuous <- data.frame(Days = t, Population = Nt_continuous)

ggplot(df_continuous, aes(x = Days, y = Population)) +
  geom_line() +
  labs(title = "Population Size", x = "Days", y = "Population Size")

Compare this to the discrete population growth plot above with small intervals between timesteps. What do you notice?

These curves are very similar!

We found earlier that \(\lambda = 1\) is the threshold between positive and negative population growth. What is the equivalent threshold for \(r\)?

lambda <- 1
r <- log(lambda)
r
[1] 0

\[ \begin{array}{l} No\ Change:\ \lambda \ =\ 1\ ,\ r\ =\ 0\\ Population\ Growth:\ \lambda \ >\ 1\ ,\ r\ >\ 0\\ Population\ Decline:\ \lambda \ < \ 1\ ,\ r\ < \ 0 \end{array} \]

2) Logistic/Density-Dependent Population Growth

2a) Continuous Logistic Population Growth

As we collect more data on the tribble population, we find that it does not match our original predictions following density-independent growth calculations. Instead, let’s explore if there may be something limiting population growth.

Why might this be?
  • Limited space

  • Limited resources (food, water, etc.)

  • Limited necessary habitat

  • Etc.

  • Perhaps limited by their own abundance: density-dependent

Recall from lecture that we can represent density-dependent growth as

\[ \frac{dN}{dt} \ =\ rN\left( 1\ -\ \frac{N}{K}\right) \]

\(r\) is defined as \(r=ln(\lambda)\)

\(K\) is the carrying capacity of the population. Do you remember from lecture under which conditions the population is considered stable?

\[ \begin{array}{l}r\ =\ 0\\K\ =\ N\end{array} \]

There can be multiple definitions of stability for a population including:

  1. Resilience - Long term rate of return towards equilibrium

  2. Resistance - Tendency to resist change in population size

  3. Reactivity - Rate of change immediately following disturbance

  4. Permanence - Degree to which a population size does not experience boundary equilibria

Today, we will focus on resilience, the ability of a population size to return to an equilibrium when a disturbance occurs. We can monitor the strength of this attraction back to equilibrium by measuring the rate of attraction. This point of attraction, or the attractor, is the carrying capacity.

There is another equilibrium point at \(N = 0\). When the ship was devoid of all tribbles, the population was at a stable size of 0. However, this is considered the unstable equilibrium point, or repeller, because as soon as this population size was perturbed by the introduction of tribbles, the population size was rapidly repelled away from the unstable equilibrium point and potentially attracted towards the stable equilibrium point of the population carrying capacity.
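We can confirm both equilibrium points numerically with a small sketch of the logistic growth-rate function (the name logistic_rate is ours; r = 1 and K = 90000 are illustrative values):

```r
# dN/dt = r * N * (1 - N / K)
logistic_rate <- function(N, r = 1, K = 90000) {
  r * N * (1 - N / K)
}

logistic_rate(0)      # repeller: growth is zero, but any perturbation moves N away
# [1] 0
logistic_rate(90000)  # attractor: growth is zero at the carrying capacity
# [1] 0
logistic_rate(45000)  # between the equilibria, growth is positive, pulling N toward K
# [1] 22500
```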

What is happening at each of these points (a, b, c, d, and e)?

A: No growth; population near zero.

B: Population growing slowly.

C: Maximum growth rate.

D: Stable population (equilibrium).

E: Population decline (resources exceeded).

Now let’s add a carrying capacity to our model. We can do this by adding a term to the equation that represents the carrying capacity, \(K\). Here is the equation from lecture:

\[ rN(1-\frac{N}{K}) \]

For our purposes in lab, this similar equation (logistic growth equation) will make it easier for us to convert this concept into code:

\[ N_{t} = \frac{K}{1 + \left(\frac{K - N_{0}}{N_{0}}\right) e^{-rt}} \]

Add a carrying capacity of 90,000 to our model. Based on the plot above, what would you expect to happen at that value?

# Define initial parameters
N0 <- 1377  
lambda <- 3  
r <- log(lambda)  
K <- 90000  
t <- 1:10 

# Write logistic function
Nt_logistic <- sapply(t, function(t) K / (1 + ((K - N0) / N0) * exp(-r * t)))

# Create dataframe
df_logistic <- data.frame(Days = t, Population = Nt_logistic)

# Plot
ggplot(df_logistic, aes(x = Days, y = Population)) +
  geom_line() +
  geom_point() +
  labs(title = "Logistic Population Growth", x = "Days", y = "Population Size")

What is occurring with these results? How has the shape of our curve changed? What does this shape imply?

In the following weeks, we will talk about modeling the population growth of multiple species at once to see how interactions between species affect population growth.

R’s deSolve package provides ordinary differential equation (ODE) solvers, such as ode(), that can be used to analyze population growth. ODEs are commonly used to calculate rates of change in order to model dynamic systems, and we will use them quite a bit throughout the semester.

As a final example, the figure on the left (Stevens 2009) is an example of simple continuous logistic growth of a population converging at \(K\). The plot on the right (Stevens 2009) shows a simulation of populations converging towards \(K\) at different rates depending on their traits. Regardless of \(N\) or \(r\), the population sizes eventually converge at \(K\).

#install.packages("deSolve")
library(deSolve)
clogistic <- function(times, y, parms) {
  n <- y[1]
  r <- parms[1]
  alpha <- parms[2]
  dN.dt <- r * n * (1 - alpha * n)
  return(list(c(dN.dt)))
}

# We create vectors for the parameters and the initial densities for all of the populations in the model. We also specify the time.
prms <- c(r = 1, alpha = 0.01)
init.N <- c(1)
t.s <- seq(0.1, 10, by = 0.1)

# Run ode(). The output is a matrix that includes the time steps and N.
out <- ode(y = init.N, times = t.s, func = clogistic, parms = prms)
plot(out[, 1], out[, 2], type = "l", xlab = "Time", ylab = "N")

outmat <- matrix(NA, nrow = length(t.s), ncol = 20)
for (j in 1:20) outmat[, j] <- {
  y <- runif(n = 1, min = 0, max = 120)
  prms <- c(r = runif(1, 0.01, 2), alpha = 0.01)
  ode(y, times = t.s, clogistic, prms)[, 2]
}


matplot(t.s, outmat, type = "l", col = 1, ylab = "All Populations")


References

Roser, Max, and Hannah Ritchie. 2023. “How Has World Population Growth Changed over Time?” Our World in Data.
Stevens, Martin Henry Hoffman. 2009. A Primer of Ecology with R. Springer.
Wickham, Hadley. 2016. ggplot2: Elegant Graphics for Data Analysis. Springer. https://ggplot2.tidyverse.org.
Wickham, Hadley, Romain François, Lionel Henry, and Kirill Müller. 2022. Dplyr: A Grammar of Data Manipulation.
Wickham, Hadley, and Lionel Henry. 2023. Purrr: Functional Programming Tools. https://CRAN.R-project.org/package=purrr.