Library


Filter
  • Global minimization  (3)
  • 1,5-bisphosphate carboxylase  (1)
  • 1
    ISSN: 1432-2048
    Keywords: Ribulose ; 1,5-bisphosphate carboxylase ; Gene mutation (N-terminus) ; Substrate affinity (ribulose-1,5-bisphosphate)
    Source: Springer Online Journal Archives 1860-2000
    Topics: Biology
    Notes: Abstract Mutagenesis in vitro of the gene encoding the large subunit of ribulose-1,5-bisphosphate carboxylase/oxygenase (EC 4.1.1.39) from Anacystis nidulans was used to generate novel enzymes. Two conserved residues, threonine 4 and lysine 11 in the N-terminus, were changed. The substitution of threonine 4 with serine or valine had little effect on the kinetic parameters. The substitution of lysine 11 with leucine, which is non-polar, increased the Km for ribulose-1,5-bisphosphate from 82 to 190 μM, but its replacement with glutamine, which has polar properties, had no appreciable effect. (A brief numerical illustration of this Km shift follows the record list.)
    Type of Medium: Electronic Resource
  • 2
    Electronic Resource
    Springer
    Annals of operations research 25 (1990), pp. 101-118
    ISSN: 1572-9338
    Keywords: Global minimization ; non-convex programming ; parallel algorithms
    Source: Springer Online Journal Archives 1860-2000
    Topics: Mathematics , Economics
    Notes: Abstract The global minimization of large-scale partially separable non-convex problems over a bounded polyhedral set using a parallel branch and bound approach is considered. The objective function consists of a separable concave part, an unseparated convex part, and a strictly linear part, which are all coupled by the linear constraints. These large-scale problems are characterized by having the number of linear variables much greater than the number of nonlinear variables. An important special class of problems that can be reduced to this form consists of the signomial global minimization problems. Such problems often arise in engineering design, and previous computational methods for such problems have been limited to the convex posynomial case. In the current work, a convex underestimating function to the objective function is easily constructed and minimized over the feasible domain to get both upper and lower bounds on the global minimum function value. At each minor iteration of the algorithm, the feasible domain is divided into subregions and convex underestimating problems over each subregion are solved in parallel. Branch and bound techniques can then be used to eliminate parts of the feasible domain from consideration and improve the upper and lower bounds. It is shown that the algorithm guarantees that a solution is obtained to within any specified tolerance in a finite number of steps. Computational results obtained on the four-processor Cray 2, both sequentially and in parallel on all four processors, are also presented. (A minimal sketch of this underestimate-and-branch scheme, which record 4 also describes, follows the record list.)
    Type of Medium: Electronic Resource
  • 3
    Electronic Resource
    Springer
    Journal of global optimization 3 (1993), pp. 79-94
    ISSN: 1573-2916
    Keywords: Global minimization ; sufficient conditions
    Source: Springer Online Journal Archives 1860-2000
    Topics: Mathematics
    Notes: Abstract A concave function defined on a polytope may have many local minima (in fact every extreme point may be a local minimum). Sufficient conditions are given such that if they are satisfied at a point, this point is known to be a global minimum. Testing whether the sufficient conditions are satisfied requires solving only a single linear program. This test has been incorporated into an earlier algorithm to give improved performance. Computational results presented show that these sufficient conditions are satisfied for certain types of problems and may substantially reduce the effort needed to find and recognize a global minimum. (A minimal sketch of this one-LP certificate follows the record list.)
    Type of Medium: Electronic Resource
  • 4
    Electronic Resource
    Springer
    Mathematical programming 42 (1988), pp. 421-448
    ISSN: 1436-4646
    Keywords: Global minimization ; concave quadratic programming ; parallel algorithms
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Mathematics
    Notes: Abstract The global minimization of large-scale concave quadratic problems over a bounded polyhedral set using a parallel branch and bound approach is considered. The objective function consists of both a concave part (nonlinear variables) and a strictly linear part, which are coupled by the linear constraints. These large-scale problems are characterized by having the number of linear variables much greater than the number of nonlinear variables. A linear underestimating function to the concave part of the objective is easily constructed and minimized over the feasible domain to get both upper and lower bounds on the global minimum function value. At each minor iteration of the algorithm, the feasible domain is divided into subregions and linear underestimating problems over each subregion are solved in parallel. Branch and bound techniques can then be used to eliminate parts of the feasible domain from consideration and improve the upper and lower bounds. It is shown that the algorithm guarantees that a solution is obtained to within any specified tolerance in a finite number of steps. Computational results are presented for problems with 25 and 50 nonlinear variables and up to 400 linear variables. These results were obtained on a four-processor CRAY-2 using both sequential and parallel implementations of the algorithm. The average parallel solution time was approximately 15 seconds for problems with 400 linear variables and a relative tolerance of 0.001. For a relative tolerance of 0.1, the average computation time appears to increase only linearly with the number of linear variables. (See the underestimate-and-branch sketch following the record list.)
    Type of Medium: Electronic Resource
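For context on record 1's finding (the Km for ribulose-1,5-bisphosphate rising from 82 to 190 μM after the lysine-11-to-leucine substitution), the short sketch below plugs both Km values into the Michaelis-Menten equation v = Vmax·[S]/(Km + [S]). The Vmax and substrate concentration are assumed, illustrative values, not figures from the paper.

```python
# Hedged illustration: effect of a Km shift (82 -> 190 uM) on Michaelis-Menten velocity.
# Vmax and the substrate concentration are assumed values, not data from record 1.

def mm_velocity(s_uM: float, km_uM: float, vmax: float = 1.0) -> float:
    """Michaelis-Menten rate v = Vmax * [S] / (Km + [S])."""
    return vmax * s_uM / (km_uM + s_uM)

if __name__ == "__main__":
    s = 100.0  # assumed ribulose-1,5-bisphosphate concentration in uM
    for label, km in [("wild type (Km = 82 uM)", 82.0), ("K11L mutant (Km = 190 uM)", 190.0)]:
        print(f"{label}: v/Vmax = {mm_velocity(s, km):.2f}")
```

At the assumed substrate concentration of 100 μM, the fractional saturation drops from roughly 0.55 to 0.34, which is the practical meaning of the reported loss of substrate affinity.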
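Records 2 and 4 describe the same underestimate-and-branch idea: replace the concave part of the objective with a linear (chord) underestimator, minimize the resulting LP relaxation over the feasible polyhedron to get a lower bound and a feasible point (hence an upper bound), then split the region and repeat until the bounds meet. The sketch below is a minimal, sequential, two-variable version built on assumptions of my own (a made-up separable concave quadratic plus linear objective, SciPy's linprog for the LP relaxations, splitting the widest box edge); it is not the authors' parallel, large-scale implementation.

```python
# Hedged sketch (not the papers' parallel implementation): branch and bound for
# minimizing a separable concave quadratic plus a linear term over a polyhedron,
# using chord (affine) underestimators of the concave part.
# All problem data below are made-up illustrative values.
import heapq
import numpy as np
from scipy.optimize import linprog

q = np.array([1.0, 2.0])        # f(x) = -sum_i q_i x_i^2 + c.x  (concave + linear)
c = np.array([0.5, -1.0])
A = np.array([[1.0, 1.0]])      # feasible set: A x <= b together with box bounds
b = np.array([3.0])
l0, u0 = np.zeros(2), np.array([2.0, 2.0])
tol = 1e-6

def f(x):
    return float(-(q * x ** 2).sum() + c @ x)

def lower_bound(l, u):
    """Minimize the chord underestimator of f over the polyhedron cut down to [l, u]."""
    g_l, g_u = -q * l ** 2, -q * u ** 2
    width = np.where(u > l, u - l, 1.0)
    slope = np.where(u > l, (g_u - g_l) / width, 0.0)   # chord lies below a concave function
    const = float((g_l - slope * l).sum())
    res = linprog(slope + c, A_ub=A, b_ub=b, bounds=list(zip(l, u)), method="highs")
    if not res.success:          # subregion infeasible: prune it
        return np.inf, None
    return res.fun + const, res.x

best_x, best_val = None, np.inf
counter = 0
heap = [(-np.inf, counter, l0, u0)]   # nodes ordered by their known lower bound
while heap:
    node_lb, _, l, u = heapq.heappop(heap)
    if node_lb >= best_val - tol:
        continue                      # node cannot improve the incumbent
    lb, x_hat = lower_bound(l, u)
    if x_hat is None or lb >= best_val - tol:
        continue
    val = f(x_hat)                    # x_hat is feasible, so f(x_hat) is an upper bound
    if val < best_val:
        best_val, best_x = val, x_hat
    if best_val - lb > tol:           # gap too large: split the widest box edge
        i = int(np.argmax(u - l))
        mid = 0.5 * (l[i] + u[i])
        left_u, right_l = u.copy(), l.copy()
        left_u[i], right_l[i] = mid, mid
        counter += 1
        heapq.heappush(heap, (lb, counter, l, left_u))
        counter += 1
        heapq.heappush(heap, (lb, counter, right_l, u))

print("approximate global minimum", best_val, "at", best_x)
```

Each child node inherits its parent's lower bound until its own relaxation is solved; that inherited bound is what lets branch and bound discard whole subregions, as the abstracts describe.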
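Record 3 rests on a simple certificate: if ℓ(x) is an affine underestimator of the concave objective f over the polytope P and a candidate x* satisfies f(x*) ≤ min over P of ℓ(x), then f(x*) ≤ ℓ(x) ≤ f(x) for every feasible x, so x* is a global minimum, and the check costs exactly one linear program. The sketch below implements that generic test under assumptions of my own (a made-up concave quadratic, chord interpolation over the variable bounds as the underestimator); it is not the specific sufficient conditions derived in the paper.

```python
# Hedged sketch of the "one LP certifies a global minimum" idea behind record 3.
# f is a made-up separable concave quadratic; the affine underestimator is the
# chord of each one-dimensional piece over the variable bounds. This shows the
# generic logic only, not the paper's actual sufficient conditions.
import numpy as np
from scipy.optimize import linprog

q = np.array([1.0, 1.0])            # f(x) = -sum_i q_i x_i^2  (concave)
A = np.array([[1.0, 2.0]])          # polytope P: A x <= b, l <= x <= u
b = np.array([2.0])
l, u = np.zeros(2), np.array([2.0, 2.0])

def f(x):
    return float(-(q * x ** 2).sum())

def certify_global_min(x_star):
    """Return True if a single LP proves x_star is a global minimum of f over P."""
    g_l, g_u = -q * l ** 2, -q * u ** 2
    slope = (g_u - g_l) / (u - l)            # chord slopes: an affine underestimator of f
    const = float((g_l - slope * l).sum())
    res = linprog(slope, A_ub=A, b_ub=b, bounds=list(zip(l, u)), method="highs")
    assert res.success
    lp_min = res.fun + const                 # minimum of the underestimator over P
    return f(x_star) <= lp_min + 1e-9

# The vertex (2, 0) is certified; the vertex (0, 1) is not (the test is only sufficient).
print(certify_global_min(np.array([2.0, 0.0])))
print(certify_global_min(np.array([0.0, 1.0])))
```

When the test fails it is merely inconclusive, as with the vertex (0, 1) above; the abstract's point is that when it succeeds, the search can stop without further branching.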